PowerShell Script for Adding Partner Admin Link (PAL)

Partner Admin Link (PAL) Overview

 

Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer to manage, configure, and support Azure services, partner users need access to the customer's environment. Using Partner Admin Link, partners can associate their partner network ID with the credentials used for service delivery. Microsoft wants to recognize your influence on Azure consumption to deepen the partnership, build your business, and highlight your expertise.

Get Recognized for Driving Azure Consumption (Microsoft Video)

Ways To Setup Partner Admin Link (PAL)

PAL can be set up via:

  • The Azure Portal
  • PowerShell
  • CLI

Further documentation

Best Practice
  • Activate Partner Admin Link whenever possible (in both incentive and non-incentive scenarios) as this maximises the demonstration of influence on customer Azure consumption.
  • PAL is not retrospective, so it is best to do this on day 1 of an engagement.
  • Automate where possible (i.e. PowerShell or CLI), with the Azure Portal as the fallback.
  • Link all accounts that have access to customer resources (as some accounts may have different scopes of permissions, some accounts may drop off over time, etc).
  • Use a Location or HQ based MPN ID (not Virtual Org).
Important

PAL is linked on a per-user, per-tenant basis.

The bit you came here for (PowerShell script)!

At DevOpsGroup we need to ensure that each time we begin a new Azure engagement we are linking our MPN ID. As this should happen the first time every user logs into a customer's environment, we wanted to make it as simple as possible.

I created this script so you are guided through the process in a simple, user-friendly manner, and it validates that the ID has been set or changed.

As a quick run-through, the code does the following:

  • Creates a log file and starts a transcript
  • Checks that the required modules are installed and installs them if they are not
  • Connects to your Azure account
  • Collects the Tenant ID (you are presented with a grid view selector of the Tenant IDs you have access to)
  • Collects the new MPN Partner ID (just press Enter if you add your default as explained below)
  • Validates the existing MPN ID (if one exists) and asks for confirmation before updating it
  • Validates the new MPN ID once it is set or changed
The code is mostly self-explanatory and has been commented. However, you may want to change this value:
$defaultValue = "1234567"

I would have this set to the DevOpsGroup MPN ID, so when it asks you for your ID, you can just hit enter to accept the above default. While this is the default, it still gives you the flexibility to enter a different ID and use that instead.
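If you just want to see the underlying cmdlets, here is a minimal sketch using the Az.Accounts and Az.ManagementPartner modules. This is not the full script (it skips the logging, module checks and grid view), and the tenant ID and partner ID below are placeholders:

# Minimal PAL linking sketch - placeholder values, no logging or grid view
$tenantId  = "00000000-0000-0000-0000-000000000000"   # customer tenant you deliver services in
$partnerId = "1234567"                                # your Location/HQ based MPN ID

Connect-AzAccount -Tenant $tenantId

# Check whether a partner ID is already linked for this user/tenant
$existing = Get-AzManagementPartner -ErrorAction SilentlyContinue

if (-not $existing) {
    New-AzManagementPartner -PartnerId $partnerId
}
elseif ($existing.PartnerId -ne $partnerId) {
    Update-AzManagementPartner -PartnerId $partnerId
}

# Confirm what is now linked
Get-AzManagementPartner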

Big thanks to Bob Larkin for his QC work and suggesting the gridview!

Hope you find this useful, please feel free to fork and use!

GET YOUR SCRIPT HERE!


Enable logging to Log Analytics Workspace on Azure SQL Managed Instances using PowerShell

 

Just a quick post to share an easy way to enable logging on your Azure SQL Managed Instances using a simple PowerShell script:

#Log Analytics workspace resource group name
$OMSRG = "prd-rg01"
#Log Analytics workspace name
$OMSWSName = "log-prd01"
#Get the Log Analytics workspace ID
$WS = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $OMSRG -Name $OMSWSName
$WSId = $WS.ResourceId
$SQLInstanceName = "prd-msql01"
$RGName = "prd-rg01"
#Get the SQL Managed Instance server
$SQLMI = Get-AzureRmSqlInstance -Name $SQLInstanceName -ResourceGroupName $RGName
$SQLMIID = $SQLMI.Id
$sqlserverdiagname = $SQLInstanceName + "-diag"
#Enable diagnostic settings to Log Analytics for the SQL instance
Set-AzureRmDiagnosticSetting -ResourceId $SQLMIID -WorkspaceId $WSId -Enabled $true -Name $sqlserverdiagname
#Get the Managed Instance database names
$SQLManagedInstanceDBS = Get-AzureRmSqlInstanceDatabase -InstanceName $SQLInstanceName -ResourceGroupName $RGName
#Iterate through each DB to enable logging to the Log Analytics workspace
foreach ($db in $SQLManagedInstanceDBS)
{
    $SQLMIDBID = $db.Id
    $diagname = $db.Name + "-diag"
    $SQLMIDBID
    $diagname
    Set-AzureRmDiagnosticSetting -ResourceId $SQLMIDBID -WorkspaceId $WSId -Enabled $true -Name $diagname
}
#It can take a while for the portal to show the config change. To check it with PowerShell, run the command below.
#Add the resource ID of the Managed Instance server or DB and it will show you what is enabled and the workspace it is configured to use.
#Get-AzDiagnosticSetting -ResourceId
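For example, to check the setting created above for the instance itself (reusing the $SQLMIID variable from the script), something like this should work:

#Verify the diagnostic setting on the Managed Instance
Get-AzureRmDiagnosticSetting -ResourceId $SQLMIID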

Demo of Honeycomb.io – Awesome Observability


Honeycomb.io provides real-time system debugging, distributed tracing for microservices, logging, alerting, dashboards, and observability for services. Architecting, planning, configuring, deploying, and maintaining an internal equivalent would cost orders of magnitude more than using Honeycomb's service.

I wanted to improve the visibility of our Azure hosted platform and thought I would give Honeycomb a go.

I decided to get something simple set up and managed to go from having no account to being able to search data from our NGINX server in well under 5 minutes, which I thought was pretty impressive!

I have a CentOS VM hosted in Azure which runs NGINX, and I wanted to be able to search through its logs without having to SSH into the server.

I decided to set up a free trial of Honeycomb, which is just a matter of signing up with your work email and verifying that address.

Once confirmed I was presented with options of what data I wanted to ingest:

For this, I chose NGINX and was presented with instructions of how to install the service on my server:

The above install instructions installed a binary file, but I got a "honeytail – command not found" error, so I used the following instead (which I found here):

wget -q https://honeycomb.io/download/honeytail/linux/honeytail_1.733_amd64.deb && \
echo 'bd135df2accd04d37df31aa4f83bd70227666690143829e2866de8086a1491d2 honeytail_1.733_amd64.deb' | sha256sum -c && \
sudo dpkg -i honeytail_1.733_amd64.deb

Once installed, the next step should run honeytail and send the data into your honeycomb account.

The first error I got using the instructions above was "permission denied", so I had to run it with "sudo" – this will obviously depend on your setup.

The next error complained about the missing required option "--nginx.conf=".

The final error I got was "log_format" missing in given config. You can find the format name in your nginx.conf; mine was "upstreamlog", but again this will depend on your environment.

In the end, the command I ran was the below:

sudo honeytail --parser=nginx \
--writekey=xxxxxx1111111111xxxxxxxxxxx \
--nginx.conf=/etc/nginx/nginx.conf \
--dataset="NGINX Logs" \
--nginx.format=upstreamlog \
--file=/var/log/nginx/access.log --backfill

Now, this server doesn't get much traffic so there wasn't loads of data, but within about a second the data was in my Honeycomb account and I had an email to confirm this.


I liked how it had parsed the logs and displayed the schema of the data:

From here you can query any of your data in real-time. It really lets you ask questions on the fly that look more like business intelligence queries, over arbitrary keys and values.

The nearest product I have used like this is probably Sumo Logic, which is similar in the real-time query and sharing these queries amongst teams etc.

However, I have to say Honeycomb was far better!

It just felt like it was built by people who care and I am sure it will continue to grow as we try to get away from traditional “monitoring” metrics.

I was amazed by how quickly I managed to get this simple demo configured. Even with a couple of issues (which I am sure were more to do with me and my environment than Honeycomb) it still took less than 5 minutes from start to finish!

Grab yourself a Demo account here and have a go yourself.

Hope this was helpful!

Any questions just get in touch via Twitter

Also, from the Honeycomb side – CEO Charity Majors


Deploy openfaas on Azure AKS using Helm

openfaas (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.

I was impressed that I managed to get this all set up and working in less than 10 minutes (if we forget that AKS took almost 20 minutes to provision!).

Prerequisites

In order to complete the steps within this article, you need a basic understanding of Kubernetes and the following:

  • An Azure subscription
  • A working installation of Kubectl (tutorial here)
  • A working installation of Helm (see here)
  • Install the openfaas CLI. See the openfaas CLI documentation for options.
  • Azure CLI installed on your development system.
  • Git command-line tools installed on your system.
Deploy the cluster with the Azure CLI

 

1) Install the official latest version of Azure CLI

2) Login to your subscription:

az login

Optional: If you have multiple subscriptions linked to your account, remember to select the one on which you want to work. (az account set -s subscription-id)

3) Create the resource group in which you want to deploy the cluster (in the example ghostinthewiresk8sRG is the name of the Resource Group and westeurope is the chosen location):

az group create -l westeurope -n ghostinthewiresk8sRG

4) Finally, create your cluster. This will create a default cluster with one master and three agents (each VM is sized, by default, as a Standard_D2_v2 with 2vCPUs and 7GiB of RAM):

az acs create --orchestrator-type Kubernetes -g ghostinthewiresk8sRG -n k8sCluster -l westeurope --generate-ssh-keys

Optional: you can specify the agent-count, the agent-vm-size and a dns-prefix for your cluster:

--agent-count 2 --agent-vm-size Standard_A1_v2 --dns-prefix k8sghost

5) Get your cluster credentials ready for kubectl:

az acs kubernetes get-credentials -n k8sCluster -g ghostinthewiresk8sRG

GET OPENFAAS

 

1) Check if the kubectl configuration is ok and if your cluster is up-and-running:

kubectl cluster-info


Deploy OpenFaaS to the newly created AKS Cluster

Please follow the instructions in the official Helm docs, and ensure you enable basic auth as outlined in the steps.
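For reference, at the time of writing the chart install boiled down to something like the commands below. Treat the current OpenFaaS Helm docs as authoritative (the URL and chart values may have changed), and the password is a placeholder:

# Create the openfaas and openfaas-fn namespaces (manifest from the faas-netes repo)
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

# Create the basic-auth secret the gateway will use (choose your own password)
kubectl -n openfaas create secret generic basic-auth --from-literal=basic-auth-user=admin --from-literal=basic-auth-password=<a-strong-password>

# Add the OpenFaaS chart repo and install with basic auth enabled
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas --install openfaas/openfaas --namespace openfaas --set functionNamespace=openfaas-fn --set basic_auth=true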

validate openfaas install

A public IP address is created for accessing the openfaas gateway. To retrieve this IP address, use the kubectl get service command. It may take a few minutes for the IP address to be assigned to the service; until then it will show as pending:

kubectl get service -l component=gateway --namespace openfaas


To test the openfaas system, browse to the external IP address on port 8080, http://13.69.84.16:8080 in this example:


Create first function

Now that openfaas is operational, you could create a function using the OpenFaaS portal but I will show you how to do it via the CLI.

1) In order to see what functions are available in the store, type:

faas-cli store list

2) We are going to use Figlet, which generates ASCII logos using a binary. To install it, run the following:

faas-cli store deploy figlet --gateway http://13.69.84.16:8080

Use curl to invoke the function. Replace the IP address in the following example with that of your openfaas gateway:

curl -X POST http://13.69.84.16:8080/function/figlet -d "ghostinthewire5"

If you made it this far and now have a working deployment of openfaas on AKS — congratulations! Try out a bunch of functions from the store, or use the openfaas CLI Tool to build your own functions and deploy them.
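As a rough sketch of what building your own function looks like (the function name and language template below are just examples, and you will need Docker plus a registry you can push to, set via the image name in the generated YAML):

# Scaffold a new function from a language template (creates hello.yml and a handler folder)
faas-cli new hello --lang python3

# Build, push and deploy it in one step against your gateway
faas-cli up -f hello.yml --gateway http://13.69.84.16:8080

# Invoke it
curl -X POST http://13.69.84.16:8080/function/hello -d "test"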

Hope this was helpful!

I was amazed by how quickly I managed to get this simple demo configured. I only have a little experience of Kubernetes and was able to get this working in less than 10 minutes (when you consider AKS took almost double that time to provision), which is awesome considering all the complexity this abstracts away from you.

Another big part of this demo that hasn’t been mentioned is the incredible support I received from the Founder Alex Ellis and the very active OpenFaaS community.

For help with OpenFaaS please visit the OpenFaaS community sign-up page.


Create a build pipeline for Angular apps in Azure DevOps

I wanted to show you how I created a Build Pipeline for an Angular App in Azure DevOps.

As always I have this as a task group so I can reuse it across projects.

My Task Group for the build is made up of 5 steps, including tagging the build using PowerShell:


My Install Node step looks like this, which also adds it to the PATH:


My NPM install step looks like this:


My NPM Run Build Script Step looks like this:


This calls the script located in my package.json file as below:


I now tag the build using a PowerShell script. This obviously isn't required, but I thought I would show it as it might be useful for some of you:
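The script itself isn't reproduced here, but one straightforward way to tag from a script is the Azure DevOps build logging command; a minimal sketch (the tag format is only an example):

# Add a tag to the running build using the Azure DevOps logging command
# (here the tag combines the source branch and build number - adjust to suit)
$tag = "$env:BUILD_SOURCEBRANCHNAME-$env:BUILD_BUILDNUMBER"
Write-Host "##vso[build.addbuildtag]$tag"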


Finally I publish the build artifact:


I hope this shows how easy it is to use Azure DevOps Build Pipelines to build an Angular application, with the added bonus of tagging the build using PowerShell.

You could then have a Release Pipeline use the artifact to deploy to an Azure WebApp or wherever else you wanted.

Hope this was helpful!

Any questions just get in touch via Twitter


Use Git Short Hash in Azure DevOps Release Pipeline

 

I wanted to include the Git Short Hash in the release name of my Azure DevOps Release Pipeline.

I am sure there may be other ways to do this but wanted to show how I did it using PowerShell.

As always I have this as a task group so I can reuse it across projects. However, it only has 1 step:


The PowerShell within this step looks like this:

$commitId = "$env:BUILD_BUILDNUMBER"

$definitionName = "1.0.0-"

$deploymentId = "$env:RELEASE_DEPLOYMENTID"

$releaseName = $definitionName + $commitId + "-" + $deploymentId

Write-Host ("##vso[release.updatereleasename]$releaseName")
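For context, this relies on the build number containing the Git short hash. If your build doesn't already do that, a build-side PowerShell step along these lines can set it (hypothetical, adjust the format to suit):

# In the build pipeline: set the build number to the Git short hash
$shortHash = "$env:BUILD_SOURCEVERSION".Substring(0, 7)
Write-Host "##vso[build.updatebuildnumber]$shortHash"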


One issue I have found is that the name obviously only updates once the release has been successfully deployed:


This is because it runs as part of the Release Pipeline. Here is a view of the logs to see it in action:


I figured out the command required to update the release name from these Microsoft Docs at the very bottom under “Release Logging Commands”

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps with custom release names.

Any questions just get in touch via Twitter


Managing Database Schemas using Azure DevOps

 

A data model changes during development and gets out of sync with the database. You can drop the database and let Entity Framework create a new one that matches the model, but this procedure results in the loss of data. The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application’s data model while preserving existing data in the database.

I am using a Task Group for this to keep it as general as possible and to allow it to be used across multiple projects.

My Task Group for the build is made up of 3 steps: a restore of the solution, the build of the migrations, and then a publish of the artifact:

My NuGet Restore step looks like this and also uses the Azure DevOps Artifacts Feed:

My Build EF Core Migrations step looks like this; more info on these scripts can be found here:

ef migrations script -v -i -o $(build.artifactstagingdirectory)\Migrations\$(Build.DefinitionName).sql --startup-project ../$(Build.DefinitionName).API

The final step takes the output of the previous step and publishes it to Azure Pipelines (Artifact Feed):

I use this artifact within a Release Pipeline after I deploy my Web App:

The settings for this look like this:

To give you an idea of the structure, the linked artifacts look like this. This has the app code and the SQL script generated in our steps above in separate folders. (The Deploy Azure App Service step above would just look at the "platform" folder.)
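The release step that applies the generated script isn't shown in detail here; if you wanted to apply it from a PowerShell step instead, a rough sketch would look like this (the server, database, credentials and artifact path are all placeholders):

# Apply the EF Core migrations script produced by the build (placeholder values throughout)
# Requires the SqlServer PowerShell module for Invoke-Sqlcmd
$sqlFile = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\Migrations\MyProject.sql"

Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydatabase" -Username "sqladmin" -Password $env:SQL_PASSWORD -InputFile $sqlFile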


Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Entity Framework Core migrations.

Any questions just get in touch via Twitter


Using Terraform in Azure DevOps Task Group

I am using Terraform to build my Infrastructure in Azure DevOps using the Task Group feature to keep it generalised. To do this I am using the Terraform Extension from Peter Groenewegen and remote state in Azure Blob Storage.

Thought I would share my setup in the hope that it would be useful for others.

My Task Group is made up of 2 steps, the Plan & the Apply:

The first part of my Plan setup looks like the below:

 

In the Terraform template path I have used a mixture of built-in system variables and a custom parameter that can be defined in your pipeline:

$(System.DefaultWorkingDirectory)/$(RELEASE.PRIMARYARTIFACTSOURCEALIAS)/$(Terraform.Template.Path)

The system parameters can be found here

In the Terraform arguments section I have entered the path to my environment var file. Again I have used the system variable $(Release.EnvironmentName) and ensured that the folder within my Terraform Repo is named exactly the same, along with the tfvars file. This ensures you can keep the Task Group generalised throughout the pipeline.

plan -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars"

I have ticked the Install Terraform box to ensure Terraform is installed on the Build Agent.

I have also ticked the Use Azure Service Principal Endpoint Box as per the Terraform best practice guidelines.

The last part of my Plan setup looks like the below:

In this section I declare that I want the state initialised in Blob storage and give the details of where I want this to be stored, again keeping it generalised by using the system variables.
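One thing to note is that the storage account and container holding the remote state need to exist before the first init/plan. If you need to create them, something along these lines works (the resource group, account and container names are illustrative):

# Create a storage account and a per-environment container for Terraform remote state
az group create -n tfstate-rg -l westeurope
az storage account create -n tfstateghost01 -g tfstate-rg -l westeurope --sku Standard_LRS
az storage container create -n devci1 --account-name tfstateghost01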

Within my Terraform code, the Main.tf file has this at the start so it knows to use the remote state:


The structure of my Terraform is as below, with the backend.tfvars sitting alongside my $environment.tfvars file. This means you could have completely different settings for each remote state if you wanted. I have it this way as devci1 & qa1 sit in a different subscription to uat1 & prod, so they use different blob storage. I also separate each state into different containers within the blob storage to keep it organised, as you will see in the next screenshot.


The backend.tfvars file looks like the below. The main thing to note is that I change the container name to match the environment (I have changed this to be hard-coded for this post, but would usually just use the variable to pass the value in to avoid mistakes).

The Terraform Apply section is exactly the same apart from the Terraform arguments section. It has Apply rather than Plan and most importantly the -auto-approve means it doesn’t require any human intervention.

apply -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars" -auto-approve

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Terraform.

Any questions just get in touch via Twitter


Which Azure Messaging System Should I Use?

At the moment there are a number of Azure messaging services available.

  • Storage Queue
  • Service Bus Queue
  • Service Bus Topic
  • Event Hubs
  • Event Grid
  • IoT Hub

This article gives you a general overview.

Storage Queue

Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. The maximum time that a message can remain in the queue is 7 days.

Common uses of Queue storage include:

  • Creating a backlog of work to process asynchronously
  • Passing messages from an Azure web role to an Azure worker role
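As a quick illustration of the queue model described above, adding a message to a Storage queue can be done with a couple of CLI calls (the storage account and queue names are placeholders, and you need to be authorised, e.g. via an account key or connection string):

# Create a queue and put a message on it
az storage queue create --name orders --account-name mystorageacct
az storage message put --queue-name orders --content "process-order-42" --account-name mystorageacct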
Service Bus Queue

Messages are sent to and received from queues. Queues enable you to store messages until the receiving application is available to receive and process them.

Messages in queues are ordered and timestamped on arrival. Once accepted, the message is held safely in redundant storage. Messages are delivered in pull mode, which delivers messages on request.

Service Bus Topic

In contrast to queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication, in a publish/subscribe pattern. Useful for scaling to large numbers of recipients, each published message is made available to each subscription registered with the topic. Messages are sent to a topic and delivered to one or more associated subscriptions, depending on filter rules that can be set on a per-subscription basis. The subscriptions can use additional filters to restrict the messages that they want to receive. Messages are sent to a topic in the same way they are sent to a queue, but messages are not received from the topic directly. Instead, they are received from subscriptions. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Messages are received from a subscription identically to the way they are received from a queue.

By way of comparison, the message-sending functionality of a queue maps directly to a topic and its message-receiving functionality maps to a subscription. Among other things, this feature means that subscriptions support the same patterns described earlier in this section with regard to queues: competing consumer, temporal decoupling, load levelling, and load balancing.

Event Hubs

Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters.

Event Hubs is used in some of the following common scenarios:

  • Anomaly detection (fraud/outliers)
  • Application logging
  • Analytics pipelines, such as clickstreams
  • Live dashboarding
  • Archiving data
  • Transaction processing
  • User telemetry processing
  • Device telemetry streaming
Event Grid

Azure Event Grid allows you to easily build applications with event-based architectures. You select the Azure resource you would like to subscribe to, and give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has custom support for application and third-party events, using custom topics and custom webhooks.

You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered. Event Grid also has built in support for custom and third-party events.
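As an example of the custom topic side of this, publishing your own event is an authenticated HTTP POST to the topic endpoint; a rough sketch (the topic name, resource group and payload are illustrative):

# Publish a custom event to an Event Grid topic (topic and resource group names are examples)
$endpoint = az eventgrid topic show --name mytopic -g myrg --query "endpoint" -o tsv
$key = az eventgrid topic key list --name mytopic -g myrg --query "key1" -o tsv

$event = @(
    @{
        id          = [guid]::NewGuid().ToString()
        eventType   = "demo.event"
        subject     = "demo/subject"
        eventTime   = (Get-Date).ToUniversalTime().ToString("o")
        data        = @{ message = "hello" }
        dataVersion = "1.0"
    }
)

Invoke-RestMethod -Uri $endpoint -Method Post -ContentType "application/json" -Headers @{ "aeg-sas-key" = $key } -Body (ConvertTo-Json $event -Depth 5)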

IoT Hub


IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can use Azure IoT Hub to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. You can connect virtually any device to IoT Hub.

IoT Hub supports communications both from the device to the cloud and from the cloud to the device. IoT Hub supports multiple messaging patterns such as device-to-cloud telemetry, file upload from devices, and request-reply methods to control your devices from the cloud. IoT Hub monitoring helps you maintain the health of your solution by tracking events such as device creation, device failures, and device connections.

IoT Hub’s capabilities help you build scalable, full-featured IoT solutions such as managing industrial equipment used in manufacturing, tracking valuable assets in healthcare, and monitoring office building usage.

What to use when?

Azure provides myriad options to perform messaging and decouple applications. Which one should you use, and when?

                          | Event Grid | Event Hubs | IoT Hub | Service Bus Topic | Service Bus Queue | Storage Queue
Event Ingestion           | X          | X          | X       |                   |                   |
Device management         |            |            | X       |                   |                   |
Messaging                 | X          | X          | X       | X                 | X                 | X
Multiple consumers        | X          | X          | X       | X                 |                   |
Multiple senders          | X          | X          | X       | X                 | X                 | X
Use for decoupling        | X          | X          |         | X                 | X                 | X
Use for publish/subscribe | X          |            |         | X                 |                   |
Max message size          | 64 KB      | 64 KB      | 256 KB  | 1 MB              | 256 KB / 1 MB     | 64 KB

Azure Troubleshooting – VM agent is unable to communicate with the Azure Backup Service

When enabling Backup and Recovery services for an Azure VM, you may get a deployment failed error message:

VM agent is unable to communicate with the Azure Backup Service

With the following error code:

UserErrorGuestAgentStatusUnavailable

This is often because the Azure VM Agent is in a failed provisioning state.

Check Azure VM Agent is Installed

Azure services such as Azure Backup require the Agent extension to be installed.

Windows

To detect if the Windows Azure VM agent is installed successfully, when you log on to a Windows Azure VM, open Task Manager > click the Details tab, and look for a process named WindowsAzureGuestAgent.exe. The presence of this process indicates the VM agent is installed.
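If you would rather check from PowerShell than Task Manager, a quick equivalent is:

# Returns the process if the guest agent is running, otherwise returns nothing
Get-Process -Name WindowsAzureGuestAgent -ErrorAction SilentlyContinue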

The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image. Manual installation may be necessary when you create a custom VM image that is deployed to Azure. You can manually install the Azure VM Agent with a Windows installer package.

To manually install the Windows VM Agent, download the latest VM Agent installer from this location.

The VM Agent can be installed by double-clicking the Windows installer file. For an automated or unattended installation of the VM agent, change the name of the downloaded installer if necessary and run the following command:

msiexec.exe /i WindowsAzureVmAgent.2.7.41491.885_180531-1125.fre.msi /quiet

As of this writing, the Windows Azure VM Agent is version 2.7.41491.885.

Linux

NOTE: The following commands are based upon the CentOS 6 operating system.

SSH into the Linux VM.

Check to see if the Azure VM Agent is installed by running the following command:
sudo yum list WALinuxAgent

If the Azure VM Agent is installed, it should return a result similar to the below:

Loaded plugins: security
Installed Packages
WALinuxAgent.noarch 2.2.18-1.el6 @openlogic

Check for available updates to the Azure VM Agent with the following command:

sudo yum check-update WALinuxAgent

If necessary, install the latest package version:

sudo yum install WALinuxAgent

Enable Azure VM Agent via PowerShell

 

Once the Azure VM Agent has been installed on the virtual machine, you must use Azure PowerShell to update the ProvisionGuestAgent property so Azure knows the VM has the agent installed.

Azure RM

Open PowerShell as Administrator

Connect to Azure RM account and enter credentials:

Connect-AzureRMAccount

Run the following commands to update the ProvisionGuestAgent property to be set as True:

$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <VM name> -DisplayHint Expand

$vm.OSProfile.WindowsConfiguration.ProvisionVMAgent = $True

Update-AzureRmVM -ResourceGroupName <resource group name> -VM $vm

If you run the Get-AzureRmVM command again, the -DisplayHint Expand will show the Windows Configuration -> ProvisionVMAgent property set as True.

get-azureRMvm -ResourceGroupName <resource group name> -Name <VM name> -DisplayHint Expand

Azure Classic

Classic Azure deployments will not be accessible through the Azure RM PowerShell. You must use the Classic Azure PowerShell module instead.

NOTE: The user must be a Co-Administrator on the Azure subscription to be able to connect to Azure Classic PowerShell.

Open PowerShell as Administrator

Connect to Azure Classic PowerShell and enter credentials:

Add-AzureAccount

Run the following commands to update the ProvisionGuestAgent property to be set as True:

$vm = Get-AzureVM -ServiceName <cloud service name> -Name <VM name>

$vm.VM.ProvisionGuestAgent = $TRUE

Update-AzureVM -Name <VM name> -VM $vm.VM -ServiceName <cloud service name>

The command should say it was successful once complete.

NOTE: In Classic Azure PowerShell, you cannot see the value of the ProvisionGuestAgent property, whether it is True or False. You have to rely on the message saying it succeeded.
