Automating Windows environment setup with Boxstarter and Chocolatey packages

Chocolatey is a command-line package manager for Windows that gives you a very Linux-esque software installation experience. This guide assumes you are already using Chocolatey, but in case you need convincing, here's what makes it so awesome: choco install googlechrome installs Google Chrome on your computer without you having to babysit the installer. You can even get fancy and list as many packages as you like, with a -y flag to automatically accept any prompts: choco install -y azcopy firefox awscli. I can't overstate how easy this makes setting up a computer for the first time.

Boxstarter uses Chocolatey packages but adds a few extra tools that let you install software faster and make changes to Windows settings. Boxstarter has some amazing functionality that I am not going to touch on here, but I would recommend checking out their docs.

Boxstarter is now managed by Chocolatey. The project still exists, but the source repository now lives under the Chocolatey org on GitHub.

Microsoft is also contributing Boxstarter scripts in a new GitHub repo.

If you're looking to use Boxstarter to automate the software installation of your Windows machines, there are a few tricks and traps worth knowing about. The sections below came from the awesome David Gardiner and the comments on the issue he raised in the repo:

Avoid MAXPATH errors

It’s worth understanding that Boxstarter embeds its own copy of Chocolatey and uses that rather than choco.exe. Due to some compatibility issues Boxstarter currently needs to embed an older version of Chocolatey. That particular version does have one known bug where the temp directory Chocolatey uses to download binaries goes one directory deeper each install. Not a problem in isolation, but when you’re installing a lot of packages all at once, you soon hit the old Windows MAXPATH limit.
A workaround is described in the bug report: essentially, use the --cache-location argument to override where downloads are saved. The trick is that you need to pass this on every choco call in your Boxstarter script, even for things like choco pin. Miss one and you may still hit the MAXPATH problem.

To make it easier, I add the following lines to the top of my Boxstarter scripts:

$ChocoCachePath = "C:\Temp"
New-Item -Path $ChocoCachePath -ItemType Directory -Force

And then I can just append --cacheLocation $ChocoCachePath to each choco statement, e.g.

cup docker-desktop --cacheLocation $ChocoCachePath
cup docker-compose --cacheLocation $ChocoCachePath
cup minikube --cacheLocation $ChocoCachePath
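
Putting the pieces together, the top of one of these scripts might look like the following. This is a sketch, and the package names are illustrative; the point is that every choco-based call, including choco pin, gets the cache override:

```powershell
# Shared cache location avoids the nested-temp-directory MAXPATH bug
$ChocoCachePath = "C:\Temp"
New-Item -Path $ChocoCachePath -ItemType Directory -Force

# Every choco/cup/cinst call needs the cache override...
cup docker-desktop --cacheLocation $ChocoCachePath
cup minikube --cacheLocation $ChocoCachePath

# ...including pin operations
choco pin add -n docker-desktop --cacheLocation $ChocoCachePath
```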

Avoid unexpected reboots

Detecting and handling reboots is one of the great things about Boxstarter. You can read more in the docs, but one thing to keep in mind is it isn’t perfect. If a reboot is initiated without Boxstarter being aware of it, then it can’t do its thing to restart and continue.

One command I've found that can cause this is Enable-WindowsOptionalFeature. If the feature you're turning on needs a restart, Boxstarter won't resume afterwards. The workaround is to leverage Chocolatey's support for the windowsfeatures source. So instead of this

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

Do this

cinst Microsoft-Hyper-V-All -source windowsfeatures

If you have a more intricate Boxstarter script, you may run into some problems that you need to diagnose. Don’t look in the usual Chocolatey.log as you won’t see anything there. Boxstarter logs all output to its own log, which by default ends up in $env:LocalAppData\Boxstarter\Boxstarter.log. This becomes even more useful when you consider that Boxstarter may automatically restart your machine multiple times, so having a persistent record of what happened is invaluable.
Other things you might want to make use of are the Boxstarter-specific commands Write-BoxstarterMessage (which writes to the log file as well as the console output) and Log-BoxstarterMessage (which writes only to the log file).

Find out more about these and other logging commands by running help about_boxstarter_logging.
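
For example, a quick sketch of how the two commands might be used in a script:

```powershell
# Written to Boxstarter.log and echoed to the console
Write-BoxstarterMessage "Installing developer tools..."

# Written to Boxstarter.log only; handy for diagnostic detail
Log-BoxstarterMessage "Chocolatey cache path is $ChocoCachePath"
```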

My scripts

I keep my Boxstarter scripts in a public gist. Feel free to have a look and fork them if they look useful.

How to use

Whenever I need to build a new laptop, I just run the following 2 commands:

. { iwr -useb } | iex; get-boxstarter -Force

The above installs Boxstarter.

Followed by this in an elevated PowerShell command prompt:

Install-BoxstarterPackage -PackageName -DisableReboots

This downloads the file from the gist and begins installing everything.

Hope this was helpful!

Any questions, just get in touch via Twitter.


Demo of Honeycomb: Awesome Observability
Honeycomb provides real-time system debugging, distributed tracing for microservices, logging, alerting, dashboards, and observability for services. Architecting, planning, configuring, deploying, and maintaining an internal equivalent would cost orders of magnitude more than using Honeycomb's service.

I wanted to improve the visibility of our Azure hosted platform and thought I would give Honeycomb a go.

I decided to get something simple set up, and managed to go from having no account to being able to search data from our NGINX server in well under 5 minutes, which I thought was pretty impressive!

I have a CentOS VM hosted in Azure which runs NGINX, and I wanted to be able to search through its logs without having to SSH into the server.

I decided to set up a free trial of Honeycomb, which is simply a matter of signing up with your work email and verifying that address.

Once confirmed I was presented with options of what data I wanted to ingest:

For this, I chose NGINX and was presented with instructions on how to install the service on my server:

The above install instructions installed a binary file, but I found I got a "honeytail: command not found" error. So I used the following instead (which I found here):

wget -q && \
echo 'bd135df2accd04d37df31aa4f83bd70227666690143829e2866de8086a1491d2 honeytail_1.733_amd64.deb' | sha256sum -c && \
sudo dpkg -i honeytail_1.733_amd64.deb
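
The pattern here (download, verify the SHA-256 checksum, then install) is worth reusing for anything you fetch onto a server. You can see how the sha256sum -c step behaves with any local file; the file below is just a stand-in, not the real package:

```shell
# Create a stand-in file in place of the downloaded .deb
echo "not a real package" > pkg.deb

# Compute its checksum, as the publisher would have done
sum=$(sha256sum pkg.deb | awk '{print $1}')

# Verify it the same way the honeytail instructions do;
# prints "pkg.deb: OK" on success, exits non-zero on mismatch
echo "$sum  pkg.deb" | sha256sum -c -
```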

Once installed, the next step is to run honeytail and send the data into your Honeycomb account.

The first error I got using the instructions above was "permission denied", so I had to run it with sudo. This will obviously depend on your setup.

The next error complained about the missing required option --nginx.conf=.

The final error I got was "log_format" missing in given config. You can find the log_format name in your nginx.conf; mine was "upstreamlog", but again this will depend on your environment.
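
If you are not sure what your log_format is called, you can pull the name straight out of nginx.conf. This sketch uses a tiny, hypothetical config fragment to show the idea (GNU grep assumed for the -oP flag):

```shell
# Hypothetical nginx.conf fragment; yours lives at /etc/nginx/nginx.conf
cat > nginx.conf <<'EOF'
http {
    log_format upstreamlog '$remote_addr [$time_local] "$request" $status';
    access_log /var/log/nginx/access.log upstreamlog;
}
EOF

# The word after "log_format" is the name to pass as --nginx.format
grep -oP 'log_format\s+\K\w+' nginx.conf
```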

In the end, the command I ran was the below:

sudo honeytail --parser=nginx \
--writekey=xxxxxx1111111111xxxxxxxxxxx \
--nginx.conf=/etc/nginx/nginx.conf \
--dataset="NGINX Logs" \
--nginx.format=upstreamlog \
--file=/var/log/nginx/access.log --backfill

Now, this server doesn't get much traffic so there wasn't a huge amount of data, but within about a second the data was in my Honeycomb account and I had an email to confirm it.

I liked how it had parsed the logs and displayed the schema of the data.

From here you can query any of your data in real-time. It really lets you ask questions on the fly that look more like business intelligence queries, over arbitrary keys and values.

The nearest product I have used like this is probably Sumo Logic, which is similar in its real-time querying and the ability to share queries among teams.

However, I have to say Honeycomb was far better!

It just felt like it was built by people who care and I am sure it will continue to grow as we try to get away from traditional “monitoring” metrics.

I was amazed by how quickly I managed to get this simple demo configured. Even with a couple of issues (which I am sure were more to do with me and my environment than with Honeycomb), it still took less than 5 minutes from start to finish!

Grab yourself a Demo account here and have a go yourself.

Hope this was helpful!

Any questions, just get in touch via Twitter.

Also, from the Honeycomb side: CEO Charity Majors.



Deploy openfaas on Azure AKS using Helm

openfaas (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes, with first-class support for metrics. Any process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding.

I was impressed that I managed to get this all set up and working in less than 10 minutes (if we forget that AKS took almost 20 minutes to provision!).


In order to complete the steps within this article, you need a basic understanding of Kubernetes and the following:

  • An Azure subscription
  • A working installation of Kubectl (tutorial here)
  • A working installation of Helm (see here)
  • Install the openfaas CLI. See the openfaas CLI documentation for options.
  • Azure CLI installed on your development system.
  • Git command-line tools installed on your system.
Deploy the cluster with the Azure CLI


1) Install the official latest version of Azure CLI

2) Login to your subscription:

az login

Optional: If you have multiple subscriptions linked to your account, remember to select the one on which you want to work. (az account set -s subscription-id)

3) Create the resource group in which you want to deploy the cluster (in the example ghostinthewiresk8sRG is the name of the Resource Group and westeurope is the chosen location):

az group create -l westeurope -n ghostinthewiresk8sRG

4) Finally, create your cluster. This will create a default cluster with one master and three agents (each VM is sized, by default, as a Standard_D2_v2 with 2vCPUs and 7GiB of RAM):

az acs create --orchestrator-type Kubernetes -g ghostinthewiresk8sRG -n k8sCluster -l westeurope --generate-ssh-keys

Optional: you can specify the agent-count, the agent-vm-size and a dns-prefix for your cluster:

--agent-count 2 --agent-vm-size Standard_A1_v2 --dns-prefix k8sghost

5) Get your cluster credentials ready for kubectl:

az acs kubernetes get-credentials -n k8sCluster -g ghostinthewiresk8sRG



1) Check that the kubectl configuration is OK and that your cluster is up and running:

kubectl cluster-info



Deploy OpenFaaS to the newly created AKS Cluster

Please follow the instructions in the official docs for installing with Helm, and ensure you set up basic auth as outlined in the steps.

Validate the openfaas install

A public IP address is created for accessing the openfaas gateway. To retrieve this IP address, use the kubectl get service command. It may take a few minutes for the IP address to be assigned to the service; until then it will show as pending:

kubectl get service -l component=gateway --namespace openfaas
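
If you would rather script against the IP than read it off the table, the assigned address can be parsed out of the service JSON once the load balancer is provisioned. A sketch, using a saved sample of the (abridged) JSON that kubectl returns for a LoadBalancer service, with jq assumed to be installed:

```shell
# Abridged sample of what something like this would return:
#   kubectl get service -l component=gateway -n openfaas -o json
# (saved here to a file so the parsing step can be shown on its own)
cat > svc.json <<'EOF'
{"status": {"loadBalancer": {"ingress": [{"ip": "20.50.60.70"}]}}}
EOF

# Pull out the external IP the gateway is published on
jq -r '.status.loadBalancer.ingress[0].ip' svc.json
```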



To test the openfaas system, browse to the external IP address on port 8080.

Create first function

Now that openfaas is operational, you could create a function using the OpenFaaS portal, but I will show you how to do it via the CLI.

1) To see what functions are available in the store, type:

faas-cli store list

2) We are going to use Figlet to generate ASCII logos. To install it, run the following:

faas-cli store deploy figlet --gateway

Use curl to invoke the function. Replace the IP address in the following example with that of your openfaas gateway:

curl -X POST -d "ghostinthewire5"

If you made it this far and now have a working deployment of openfaas on AKS, congratulations! Try out a bunch of functions from the store, or use the openfaas CLI tool to build and deploy your own functions.

Hope this was helpful!

I was amazed by how quickly I managed to get this simple demo configured. I only have a little experience with Kubernetes and was able to get this working in less than 10 minutes (when you consider AKS took almost double that time to provision), which is awesome considering all the complexity this abstracts away from you.

Another big part of this demo that hasn't been mentioned is the incredible support I received from founder Alex Ellis and the very active OpenFaaS community.

For help with OpenFaaS please visit the OpenFaaS community sign-up page.
