My Microsoft MVP Award gift package is here!

A little over a week after becoming a Microsoft MVP, I’ve now received my MVP Award gift package!

So I'm a little excited today. I've been tracking this package all the way from Redmond to Durham, United Kingdom, and wanted to show you exactly what comes with the MVP Award package.

All in all, a fantastic unboxing experience. It's a really nice pack, and it's great to have something physical to represent the award.

BTW, here’s my official MS MVP URL

Hope to see you all at the MVP Summit!

FedEX box marked Fragile (Mailed from Redmond)
First sight of the MVP box
2020-2021 MVP Award Box
Certification Plaque from Microsoft with Satya Nadella’s signature
A message from the MVP Award Team, MVP Identification Card and an MVP Lapel Pin
An inner box, surrounded by packing foam. Wonder what could be in there?
Lots of MVP logo stickers and some important information.
And the MVP Trophy, with a ring for each year you’re an MVP.

PowerShell Script for Adding Partner Admin Link (PAL)

Partner Admin Link (PAL) Overview


Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer managing, configuring, and supporting Azure services, the partner users will need access to the customer’s environment. Using Partner Admin Link, partners can associate their partner network ID with the credentials used for service delivery. Microsoft wants to recognize your influence on Azure consumption to deepen the partnership, build your business, and highlight your expertise.

Get Recognized for Driving Azure Consumption (Microsoft Video)

Ways To Set Up Partner Admin Link (PAL)

PAL can be set up via:

  • The Azure Portal
  • PowerShell
  • CLI

Further documentation

Best Practice
  • Activate Partner Admin Link whenever possible (in both incentive and non-incentive scenarios) as this maximises the demonstration of influence on customer Azure consumption.
  • PAL is not retrospective, so it is best to do this on day 1 of an engagement.
  • Automate where possible (e.g. PowerShell or the CLI), with the Azure Portal as the fallback.
  • Link all accounts that have access to customer resources (as some accounts may have different scopes of permissions, some accounts may drop off over time, etc).
  • Use a Location or HQ based MPN ID (not Virtual Org).

PAL is linked on a per user, per tenant basis.

The bit you came here for (the PowerShell script)!

At DevOpsGroup we need to ensure that each time we begin a new Azure engagement we link our MPN ID. As this should happen the first time each user logs into a customer's environment, we wanted to make it as simple as possible.

I created this script so you are guided through the process in a simple, user-friendly manner, and it validates that the ID has been set or changed.

As a quick run-through, the code does the following:

  • Creates a log file and starts a transcript
  • Checks the required modules are installed and, if not, installs them
  • Connects to your Azure account
  • Collects the Tenant ID (you are presented with a grid view selector of the Tenant IDs you have access to)
  • Collects the new MPN Partner ID (just press Enter if you add your default as explained below)
  • Validates the existing MPN ID (if it exists) and confirms whether you want to update it
  • Validates the new MPN ID if one is set or changed
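The full script isn't shown inline, but the heart of the flow above can be sketched with the Az.ManagementPartner cmdlets. This is a simplified illustration rather than the exact script; the grid view title, prompt text and default ID are placeholders:

```powershell
# Sign in and pick the customer tenant from a grid view
Connect-AzAccount
$tenant = Get-AzTenant | Out-GridView -Title "Select Tenant" -PassThru
Set-AzContext -TenantId $tenant.Id

# Prompt for the MPN ID, falling back to a default
$defaultValue = "1234567"
$mpnId = Read-Host "MPN Partner ID [$defaultValue]"
if ([string]::IsNullOrWhiteSpace($mpnId)) { $mpnId = $defaultValue }

# Create or update the link for this user in this tenant,
# then read it back to validate
if (Get-AzManagementPartner -ErrorAction SilentlyContinue) {
    Update-AzManagementPartner -PartnerId $mpnId
} else {
    New-AzManagementPartner -PartnerId $mpnId
}
Get-AzManagementPartner
```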
The code is mostly self-explanatory and has been commented. However, you may want to change this value:
$defaultValue = "1234567"

We have this set to the DevOpsGroup MPN ID, so when the script asks you for your ID, you can just hit Enter to accept the above default. While this is the default, it still gives you the flexibility to enter a different ID and use that instead.

Big thanks to Bob Larkin for his QC work and suggesting the gridview!

Hope you find this useful, please feel free to fork and use!

Enable logging to Log Analytics Workspace on Azure SQL Managed Instances using PowerShell



Just a quick post to share an easy way to enable logging on your Azure SQL Managed Instances using a simple PowerShell script:

#Log Analytics workspace resource group name
$OMSRG = "prd-rg01"
#Log Analytics workspace name
$OMSWSName = "log-prd01"
#Get the Log Analytics workspace ID
$WS = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $OMSRG -Name $OMSWSName
$WSId = $WS.ResourceId
$SQLInstanceName = "prd-msql01"
$RGName = "prd-rg01"
#Get the SQL Managed Instance server
$SQLMI = Get-AzureRmSqlInstance -Name $SQLInstanceName -ResourceGroupName $RGName
$SQLMIID = $SQLMI.Id
$sqlserverdiagname = $SQLInstanceName + "-diag"
#Enable diagnostic settings to Log Analytics for the SQL instance
Set-AzureRmDiagnosticSetting -ResourceId $SQLMIID -WorkspaceId $WSId -Enabled $true -Name $sqlserverdiagname
#Get the Managed Instance database names
$SQLManagedInstanceDBS = Get-AzureRmSqlInstanceDatabase -InstanceName $SQLInstanceName -ResourceGroupName $RGName
#Iterate through each DB to enable logging to the Log Analytics workspace
foreach ($db in $SQLManagedInstanceDBS) {
    $diagname = $db.Name + "-diag"
    Set-AzureRmDiagnosticSetting -ResourceId $db.Id -WorkspaceId $WSId -Enabled $true -Name $diagname
}
#It can take a while for the portal to show the config change. To check it with PowerShell, run the command below,
#passing the resource ID of the Managed Instance server or a DB; it will show what is enabled and which workspace is configured.
#Get-AzureRmDiagnosticSetting -ResourceId <resourceId>

Automating Windows environments setup with Boxstarter and Chocolatey packages

Chocolatey is a command-line package manager for Windows that gives you a very Linux-esque software installation experience. This guide assumes you are already using Chocolatey, but in case you need convincing, here's what makes it so awesome: choco install googlechrome will install Google Chrome on your computer without you having to babysit the installer. You can even get fancy and list as many packages as you like with a -y flag to automatically accept any prompts: choco install -y azcopy firefox awscli. I can't overstate how easy this makes setting a computer up for the first time.

Boxstarter uses Chocolatey packages but adds a few extra tools that allow you to install software faster and make changes to Windows settings. Boxstarter has some amazing functionality that I am not going to touch on here, but I would recommend checking out their docs.

Boxstarter is now managed by Chocolatey. The original site still exists, but the source repository is now under the Chocolatey org on GitHub.

Microsoft is contributing Boxstarter scripts in a new GitHub repo.

If you're looking to use Boxstarter to automate the software installation of your Windows machines, there are a few tricks and traps worth knowing about. The sections below came from the awesome David Gardiner and the comments on the issue he raised in the repo:

Avoid MAXPATH errors

It’s worth understanding that Boxstarter embeds its own copy of Chocolatey and uses that rather than choco.exe. Due to some compatibility issues Boxstarter currently needs to embed an older version of Chocolatey. That particular version does have one known bug where the temp directory Chocolatey uses to download binaries goes one directory deeper each install. Not a problem in isolation, but when you’re installing a lot of packages all at once, you soon hit the old Windows MAXPATH limit.
A workaround is described in the bug report: essentially, use the --cache-location argument to override where downloads are saved. The trick here is that you need to use this on all choco calls in your Boxstarter script, even for things like choco pin. Miss one and you may still experience the MAXPATH problem.

To make it easier, I add the following lines to the top of my Boxstarter scripts:

$ChocoCachePath = "C:\Temp"
New-Item -Path $ChocoCachePath -ItemType directory -Force

And then I can just append --cacheLocation $ChocoCachePath to each choco statement, e.g.

cup docker-desktop --cacheLocation $ChocoCachePath
cup docker-compose --cacheLocation $ChocoCachePath
cup minikube --cacheLocation $ChocoCachePath

Avoid unexpected reboots

Detecting and handling reboots is one of the great things about Boxstarter. You can read more in the docs, but one thing to keep in mind is it isn’t perfect. If a reboot is initiated without Boxstarter being aware of it, then it can’t do its thing to restart and continue.

One command I’ve found that can cause this is using Enable-WindowsOptionalFeature. If the feature you’re turning on needs a restart, then Boxstarter won’t resume afterwards. The workaround here is to leverage Chocolatey’s support for the windowsfeatures source. So instead of this

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

Do this

cinst Microsoft-Hyper-V-All -source windowsfeatures

If you have a more intricate Boxstarter script, you may run into some problems that you need to diagnose. Don’t look in the usual Chocolatey.log as you won’t see anything there. Boxstarter logs all output to its own log, which by default ends up in $env:LocalAppData\Boxstarter\Boxstarter.log. This becomes even more useful when you consider that Boxstarter may automatically restart your machine multiple times, so having a persistent record of what happened is invaluable.
The other thing you might want to make use of is Boxstarter-specific commands like Write-BoxstarterMessage (which writes to the log file as well as the console output) and Log-BoxstarterMessage (which writes only to the log file).

Find out more about these and other logging commands by running help about_boxstarter_logging.
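As a quick illustration of the two commands (the package name and message text here are made up, and $ChocoCachePath is the variable defined earlier):

```powershell
# Writes to both the console and Boxstarter.log
Write-BoxstarterMessage "Installing container tooling..."

cinst -y docker-desktop --cacheLocation $ChocoCachePath

# Writes to Boxstarter.log only
Log-BoxstarterMessage "docker-desktop step complete"
```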

My scripts

I keep my Boxstarter scripts in a public repository. Feel free to have a look and fork them if they look useful.

How to use

Whenever I need to build a new laptop, I just run the following 2 commands:

. { iwr -useb } | iex; get-boxstarter -Force

The above installs Boxstarter.

Followed by this in an elevated PowerShell command prompt:

Install-BoxstarterPackage -PackageName -DisableReboots

This downloads the file from the gist and begins installing everything

Hope this was helpful!

Any questions just get in touch via Twitter

Demo of Honeycomb – Awesome Observability

Honeycomb provides real-time system debugging, distributed tracing for microservices, and logging, alerting, dashboards, and observability for services. Architecting, planning, configuring, deploying, and maintaining an internal equivalent would cost orders of magnitude more than using Honeycomb's service.

I wanted to improve the visibility of our Azure hosted platform and thought I would give Honeycomb a go.

I decided to get something simple setup and managed to go from having no account to being able to search data from our NGINX server in much less than 5 minutes which I thought was pretty impressive!

I have a CentOS VM hosted in Azure which runs NGINX, and I wanted to be able to search through its logs without having to SSH into the server.

I decided to set up a free trial of Honeycomb, which is simply a matter of signing up with your work email and verifying that address.

Once confirmed I was presented with options of what data I wanted to ingest:

For this, I chose NGINX and was presented with instructions of how to install the service on my server:

The above install instructions installed a binary file, but I found I got a "honeytail: command not found" error. So I used the following instead (which I found here):

wget -q && \
echo 'bd135df2accd04d37df31aa4f83bd70227666690143829e2866de8086a1491d2 honeytail_1.733_amd64.deb' | sha256sum -c && \
sudo dpkg -i honeytail_1.733_amd64.deb

Once installed, the next step should run honeytail and send the data into your honeycomb account.

The first error I got using the instructions above was "permission denied", so I had to run it with sudo. This will obviously depend on your setup.

The next error complained that it was missing the required option --nginx.conf=

The final error I got was "log_format" missing in the given config. You can find this in your nginx.conf; mine was "upstreamlog", but again this will depend on your environment.

In the end, the command I ran was the below:

sudo honeytail --parser=nginx \
--writekey=xxxxxx1111111111xxxxxxxxxxx \
--nginx.conf=/etc/nginx/nginx.conf \
--dataset="NGINX Logs" \
--nginx.format=upstreamlog \
--file=/var/log/nginx/access.log --backfill

Now this server doesn’t get much traffic so there wasn’t loads of data but within about a second the data was in my honeycomb account and I had an email to confirm this.

I liked how it had parsed everything and displayed the schema of the data:

From here you can query any of your data in real-time. It really lets you ask questions on the fly that look more like business intelligence queries, over arbitrary keys and values.

The nearest product I have used like this is probably Sumo Logic, which is similar in the real-time query and sharing these queries amongst teams etc.

However, I have to say Honeycomb was far better!

It just felt like it was built by people who care and I am sure it will continue to grow as we try to get away from traditional “monitoring” metrics.

I was amazed by how quick I managed to get this simple demo configured. Even with a couple of issues (Which I am sure were more to do with me / my environment, than Honeycomb) it still took less than 5 minutes from start to finish!

Grab yourself a Demo account here and have a go yourself.

Hope this was helpful!

Any questions just get in touch via Twitter

Also, from the Honeycomb side – CEO Charity Majors


Deploy OpenFaaS on Azure AKS using Helm

OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first-class support for metrics. Any process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding.

I was impressed that I managed to get this all setup and working in less than 10 minutes (If we forget that AKS took almost 20 minutes to provision!)


In order to complete the steps within this article, you need a basic understanding of Kubernetes & the following.

  • An Azure subscription
  • A working installation of Kubectl (tutorial here)
  • A working installation of Helm (see here)
  • Install the openfaas CLI. See the openfaas CLI documentation for options.
  • Azure CLI installed on your development system.
  • Git command-line tools installed on your system.
Deploy the cluster with the Azure CLI


1) Install the official latest version of Azure CLI

2) Login to your subscription:

az login

Optional: If you have multiple subscriptions linked to your account, remember to select the one on which you want to work. (az account set -s subscription-id)

3) Create the resource group in which you want to deploy the cluster (in the example ghostinthewiresk8sRG is the name of the Resource Group and westeurope is the chosen location):

az group create -l westeurope -n ghostinthewiresk8sRG

4) Finally, create your cluster. This will create a default cluster with one master and three agents (each VM is sized, by default, as a Standard_D2_v2 with 2vCPUs and 7GiB of RAM):

az acs create --orchestrator-type Kubernetes -g ghostinthewiresk8sRG -n k8sCluster -l westeurope --generate-ssh-keys

Optional: you can specify the agent-count, the agent-vm-size and a dns-prefix for your cluster:

--agent-count 2 --agent-vm-size Standard_A1_v2 --dns-prefix k8sghost

5) Get your cluster credentials ready for kubectl:

az acs kubernetes get-credentials -n k8sCluster -g ghostinthewiresk8sRG



1) Check if the kubectl configuration is ok and if your cluster is up-and-running:

kubectl cluster-info



Deploy OpenFaaS to the newly created AKS Cluster

Follow the instructions in the official docs for installing with Helm, and ensure you use basic auth as outlined in the steps.

Validate the OpenFaaS install

A public IP address is created for accessing the OpenFaaS gateway. To retrieve this IP address, use the kubectl get service command. It may take a few minutes for the IP address to be assigned to the service; until then it will show as pending:

kubectl get service -l component=gateway --namespace openfaas



To test the openfaas system, browse to the external IP address on port 8080, in this example:

Create first function

Now that OpenFaaS is operational, you could create a function using the OpenFaaS portal, but I will show you how to do it via the CLI.

1) In order to see what functions are available in the store, type:

faas-cli store list

2) We are going to use Figlet to generate ASCII logos through the use of a binary. To install it, run the following:

faas-cli store deploy figlet --gateway

Use curl to invoke the function. Replace the IP address in the following example with that of your openfaas gateway:

curl -X POST -d “ghostinthewire5”

If you made it this far and now have a working deployment of openfaas on AKS — congratulations! Try out a bunch of functions from the store, or use the openfaas CLI Tool to build your own functions and deploy them.

Hope this was helpful!

I was amazed by how quick I managed to get this simple demo configured. I only have a little experience of Kubernetes and was able to get this working in less than 10 minutes (When you consider AKS took almost double the time to provision), which is awesome considering all the complexity this abstracts away from you.

Another big part of this demo that hasn’t been mentioned is the incredible support I received from the Founder Alex Ellis and the very active OpenFaaS community.

For help with OpenFaaS please visit the OpenFaaS community sign-up page.

Create a build pipeline for Angular apps in Azure DevOps

I wanted to show you how I created a Build Pipeline for an Angular App in Azure DevOps.

As always, I have this as a task group so I can reuse it across projects.

My Task Group for the build is comprised of 5 steps, including tagging the build using PowerShell:

My Install Node step looks like this, which also adds it to the PATH:

My NPM install step looks like this:

My NPM Run Build Script Step looks like this:

This calls the script located in my package.json file as below:

I now tag the build using a PowerShell script. This obviously isn't required, but I thought I would show it as it might be useful for some of you:
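The core of a tagging step like this is just the build.addbuildtag logging command; a minimal sketch (the choice of tag value here is illustrative) would be:

```powershell
# Tag the running build with the source branch name (illustrative choice of tag)
$tag = "$env:BUILD_SOURCEBRANCHNAME"
Write-Host "##vso[build.addbuildtag]$tag"
```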

Finally I publish the build artifact:

I hope this shows how easy it is to use Azure DevOps Build Pipelines to build an Angular application with an added bonus of tagging using PowerShell.

You could then have a Release Pipeline use the artifact to deploy to an Azure WebApp or wherever else you wanted.

Hope this was helpful!

Any questions just get in touch via Twitter

Use Git Short Hash in Azure DevOps Release Pipeline


I wanted to include the Git Short Hash in the release name of my Azure DevOps Release Pipeline.

I am sure there may be other ways to do this but wanted to show how I did it using PowerShell.

As always, I have this as a task group so I can reuse it across projects. However, it only has 1 step:

The PowerShell within this step looks like this:

$commitId = "$env:BUILD_BUILDNUMBER"

$definitionName = "1.0.0-"

$deploymentId = "$env:RELEASE_DEPLOYMENTID"

$releaseName = $definitionName + $commitId + "-" + $deploymentId

Write-Host ("##vso[release.updatereleasename]$releaseName")
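This assumes the build number already contains the Git short hash. If yours doesn't, it can be derived from the full commit SHA that the pipeline exposes in Build.SourceVersion; a sketch:

```powershell
# Build.SourceVersion holds the full 40-character commit SHA;
# the first 7 characters are the conventional short hash
$shortHash = "$env:BUILD_SOURCEVERSION".Substring(0, 7)
$releaseName = "1.0.0-" + $shortHash + "-" + $env:RELEASE_DEPLOYMENTID
Write-Host ("##vso[release.updatereleasename]$releaseName")
```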

One issue I have found is that the name obviously only updates once the release has been successfully deployed:




This is because it runs as part of the Release Pipeline. Here is a view of the logs to see it in action:




I figured out the command required to update the release name from these Microsoft Docs, at the very bottom under "Release Logging Commands".

Hope you find this post useful and that it helps you build your Azure DevOps Release Pipelines with custom release names.

Any questions just get in touch via Twitter

Managing Database Schemas using Azure DevOps


A data model changes during development and gets out of sync with the database. You can drop the database and let Entity Framework create a new one that matches the model, but this procedure results in the loss of data. The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application’s data model while preserving existing data in the database.

I am using a Task Group for this to keep it as general as possible and allow it to be used across multiple projects.

My Task Group for the build is comprised of 3 steps, a restore of the solution, the build of the migrations and then a publish of the artifact:

My NuGet Restore step looks like this and also uses the Azure DevOps Artifacts Feed:

My Build EF Core Migrations step looks like this, more info can be found on these scripts here:

ef migrations script -v -i -o $(build.artifactstagingdirectory)\Migrations\$(Build.DefinitionName).sql --startup-project ../$(Build.DefinitionName).API

The final step takes the output of the previous step and publishes it to Azure Pipelines (Artifact Feed):

I use this artifact within a Release Pipeline after I deploy my Web App:

The settings for this look like this:

To give you an idea of the structure, the linked artifacts look like this. This has the app code and the SQL script generated in our steps above in separate folders (the Deploy Azure App Service step above would just look at the "platform" folder):
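To actually apply the generated script to the target database during the release, an inline PowerShell step with Invoke-Sqlcmd is one option. The server, database, credential variables and artifact path below are placeholders, not values from my pipeline:

```powershell
# Apply the EF Core migration script produced by the build (placeholder values)
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
    -Database "mydb" `
    -Username $env:SQL_ADMIN_USER -Password $env:SQL_ADMIN_PASSWORD `
    -InputFile "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\Migrations\MyProject.sql"
```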

Hope you find this post useful and that it helps you manage your database schemas using Azure DevOps & Entity Framework Core Migrations.

Any questions just get in touch via Twitter

Using Terraform in Azure DevOps Task Group

I am using Terraform to build my infrastructure in Azure DevOps, using the Task Group feature to keep it generalised. To do this I am using the Terraform extension from Peter Groenewegen and remote state in Azure Blob Storage.

Thought I would share my setup in the hope that it would be useful for others.

My Task Group is comprised of 2 steps, the Plan & the Apply:

The first part of my Plan setup looks like the below:


In the Terraform template path I have used a mixture of built-in system variables and a custom parameter that can be defined in your pipeline


The system parameters can be found here

In the Terraform arguments section I have entered the path to my environment var file. Again I have used the system variable $(Release.EnvironmentName) and ensured that the folder within my Terraform Repo is named exactly the same, along with the tfvars file. This ensures you can keep the Task Group generalised throughout the pipeline.

plan -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars"

I have ticked the Install Terraform box to ensure Terraform is installed on the Build Agent.

I have also ticked the Use Azure Service Principal Endpoint Box as per the Terraform best practice guidelines.

The last part of my Plan setup looks like the below:

In this section I declare that I want the state initialised in Blob Storage and give the details of where I want it to be stored, again keeping it generalised by using the system variables.

Within my Terraform code, the file has this at the start so it knows to use the remote state:
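That block is Terraform's partial backend configuration; it looks something like the below, with all the actual values supplied from backend.tfvars at init time:

```hcl
terraform {
  backend "azurerm" {}
}
```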

The structure of my Terraform is as below, with the backend.tfvars sitting alongside my $environment.tfvars file. This means you could have completely different settings for each remote state if you wanted. I have it this way as devci1 & qa1 sit in a different subscription to uat1 & prod, so they use different blob storage. I also separate each state into different containers within the blob storage to keep it organised.

The backend.tfvars file looks like the below. The main thing to note is that I change the container name to match the environment. (I have changed this to be hard-coded for this post, but would usually just use the variable to pass the value in to avoid mistakes.)
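For illustration, a backend.tfvars along these lines (the resource group, storage account and container names here are made up):

```hcl
resource_group_name  = "prd-tfstate-rg"
storage_account_name = "prdtfstatesa01"
container_name       = "devci1"
key                  = "terraform.tfstate"
```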

The Terraform Apply section is exactly the same apart from the Terraform arguments section. It has apply rather than plan and, most importantly, -auto-approve means it doesn't require any human intervention.

apply -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars" -auto-approve

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Terraform.

Any questions just get in touch via Twitter