One of the great things about consumption-based billing is that you only pay for what you use. In most cases that is just a marketing strapline, as servers tend to be up all the time.

However, if you are running RDS session pools on Azure or Azure Stack, the solution I have developed below will give you one of the cheapest Virtual Desktop / remote working solutions out there.

Now that I have had this stable solution running in multiple implementations for a long time, I wanted to share this with the wider community.

I specifically designed and developed this solution to save my customers significant costs in Azure and / or Azure Stack, while at the same time overcoming the limitations of trying to use scale sets to achieve the same goal. On average, clients who provide Virtual Desktops to their users via session pools and use this solution save 40% to 60% of their compute running costs, month on month.

I have a client who is saving close to 80%, because they only hit peak demand once a month and the farm otherwise runs at around 20% capacity. The point is, results will vary, but irrespective of the deployment some level of cost saving will be had.

Consider this as part of your overall planning. Some people deploy fewer, larger session hosts; with this solution it's recommended to have slightly more, smaller-sized session hosts. Because we will be able to power hosts up and down automatically, having four F8s-class VMs will be more cost-efficient than having two F16s VMs, for example.

Let's discuss some of the challenges this solution overcomes:

  1. Required capacity – This solution allows you to deploy the full capacity, say five hosts specified for 16 user connections each, but pay only for the storage year-round and incur compute costs (the lion's share of the bill) only as required.
  2. Scaling out – Deploying a new session host from an image, joining it to the domain and then the farm, and getting it ready for user connections takes far too long to respond to demand. This solution instead takes advantage of pre-deployed, pre-configured hosts that are powered up within a minute as needed, based on an administratively defined “max-sessions” variable.
  3. Scaling down – One cannot simply power off a host as a scale set might. Hosts need to be monitored and drained, and only powered off once there is no risk to users. This solution uses both a “maintenance-window” and a “min-sessions” variable to ensure there is no impact on end users.

Let's look at the solution components themselves.

Note: This solution assumes you understand all the components of an RDS Session Host Farm and some basic concepts around Azure / Azure Stack, Azure Active Directory / Active Directory.

  1. We are going to set up a scheduled task running on the RDS Broker which will run the capacity script.
  2. It is always recommended to sign your PowerShell scripts with a trusted code-signing certificate. This has two advantages: with an AllSigned execution policy, only signed code will run on the server, and if somebody modifies your code it will no longer run. In addition to the script itself, there will be a log file showing the operational output and a text file that makes the capacity script stateful (required for draining hosts, i.e. hosts with existing user sessions).
  3. The capacity script needs to be able to interact with the virtual machines hosting your session pools running in Azure / Azure Stack. For this we will use an application principal and a self-signed certificate with a non-exportable private key to authenticate with Azure Active Directory.
  4. Your session hosts will be joined either to Azure Active Directory: Domain Services or to Active Directory: Domain Services, so we will need a service account, which runs our scheduled task, to interact directly with WMI on the session hosts.
  5. The script requires fine grained permissions which are covered by the Virtual Machine Operator role (Enumerate, power up and down Virtual Machines).
  6. The script requires local administrator permissions on the session hosts in order to block any new connections during draining operations.
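The last two points come together during draining: new connections are blocked on the host and it is only powered off once existing sessions have gone. A minimal sketch of that sequence, assuming the RemoteDesktop module on the broker and Az access to the VM (`$HostName` and the other variables are illustrative and mirror the script's own names):

```powershell
# Block new connections so the host can drain
Set-RDSessionHost -ConnectionBroker $Broker `
                  -SessionHost $HostName `
                  -NewConnectionAllowed No

# Only power off once the remaining sessions on this host have gone
$Remaining = (Get-RDUserSession -ConnectionBroker $Broker `
                                -CollectionName $CollectionName |
              Where-Object { $_.HostServer -eq $HostName }).Count

if ($Remaining -eq 0) {
    # Deallocate to stop compute billing (a plain power-off still bills)
    Stop-AzVM -ResourceGroupName $VMResourceGroup `
              -Name ($HostName.Split('.')[0]) -Force
}
```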

Below are the steps required to get the solution running.

Create Azure Active Directory Principal

  1. From an elevated PowerShell window running on the RDS Session Broker, create a (correctly defined) self-signed certificate which we will use to authenticate our Azure Active Directory Application Principal.
New-SelfSignedCertificate -Subject "VDI Capacity" `
                          -KeyAlgorithm RSA `
                          -KeyLength 2048 `
                          -NotAfter (Get-Date).AddYears(10) `
                          -KeyExportPolicy NonExportable `
                          -KeySpec Signature `
                          -CertStoreLocation "cert:\LocalMachine\My"

2. Connect to Azure Active Directory and create the Application Registration.

Make a note of the Application Id and Directory Id. You will need this later.

Export the public certificate you created in step 1 above and upload it under Certificates & secrets. Make a note of the thumbprint. You will need this later.
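If you prefer, the registration can also be created from PowerShell. A sketch, assuming the Az module is installed and you are signed in with sufficient AAD rights (the display name and certificate subject are illustrative):

```powershell
# Find the self-signed certificate created in step 1
$Cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -eq 'CN=VDI Capacity' }

# Create the application registration, attach the certificate as a
# credential and create the matching service principal
$App = New-AzADApplication -DisplayName 'VDI Capacity'
New-AzADAppCredential -ApplicationId $App.AppId `
                      -CertValue ([Convert]::ToBase64String($Cert.GetRawCertData())) `
                      -EndDate $Cert.NotAfter
New-AzADServicePrincipal -ApplicationId $App.AppId

# These values map onto the script variables used later
$App.AppId                 # -> $AppId
(Get-AzContext).Tenant.Id  # -> $TenantId
$Cert.Thumbprint           # -> $Thumbprint
```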

Install Azure or AzureStack PowerShell Modules

Note: Make sure you follow the relevant links carefully, as the two modules are not interchangeable.

For Azure Stack –

For Azure –
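For reference, the installs typically look like the following; treat this as a sketch, since Azure Stack requires the hybrid-profile module versions that match your stamp rather than the latest gallery release:

```powershell
# Public Azure: the current Az module from the PowerShell Gallery
Install-Module -Name Az -Scope AllUsers -Repository PSGallery

# Azure Stack: install via the bootstrapper and pin the hybrid API profile
# (check the profile version required by your Azure Stack build)
Install-Module -Name Az.BootStrapper -Scope AllUsers
Use-AzProfile -Profile 2020-09-01-hybrid
```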

Create / setup the Virtual Machine Operator Role in the target subscription

Note: If you are using Azure Stack you will need to create the role first.

For each of the session hosts, grant your AAD Application Id the Virtual Machine Operator permission.
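A sketch of creating the role and granting it, assuming `$SubscriptionId`, `$VMResourceGroup` and `$AppId` are the values used throughout this article; the action list shown is the minimal enumerate / start / deallocate set, and scoping at the resource group covers every session host in it (you can also scope per VM):

```powershell
# Clone a built-in role as a template and trim it down
$Role = Get-AzRoleDefinition -Name 'Virtual Machine Contributor'
$Role.Id = $null
$Role.Name = 'Virtual Machine Operator'
$Role.Description = 'Enumerate, power up and power down virtual machines.'
$Role.Actions.Clear()
$Role.Actions.Add('Microsoft.Compute/virtualMachines/read')
$Role.Actions.Add('Microsoft.Compute/virtualMachines/instanceView/read')
$Role.Actions.Add('Microsoft.Compute/virtualMachines/start/action')
$Role.Actions.Add('Microsoft.Compute/virtualMachines/deallocate/action')
$Role.AssignableScopes.Clear()
$Role.AssignableScopes.Add("/subscriptions/$SubscriptionId")
New-AzRoleDefinition -Role $Role

# Assign the role to the application principal
New-AzRoleAssignment -ServicePrincipalName $AppId `
                     -RoleDefinitionName 'Virtual Machine Operator' `
                     -Scope "/subscriptions/$SubscriptionId/resourceGroups/$VMResourceGroup"
```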

Create the Azure Active Directory: Domain Services / Active Directory: Domain Services Account

For Azure AD:DS, you will create your user through the Azure Active Directory interface.

For AD:DS, you will use either the Active Directory RSAT tools or the PowerShell module.

Make a note of the username and password. You will need this later.
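For the AD:DS case, a minimal sketch using the ActiveDirectory module; the account name, UPN suffix and settings are illustrative:

```powershell
# Prompt for the password rather than embedding it in the script
$Password = Read-Host -AsSecureString -Prompt 'Service account password'

New-ADUser -Name 'svc-vdi-capacity' `
           -UserPrincipalName 'svc-vdi-capacity@corp.example.com' `
           -AccountPassword $Password `
           -PasswordNeverExpires $true `
           -Enabled $true
```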

Grant the AD:DS Service Account local administrator permissions on the RDS Farm

  1. In both Azure AD:DS and AD:DS, you can use Group Policy to accomplish this.
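Group Policy (e.g. Restricted Groups) is the maintainable route; as a quick per-host alternative for a lab, you can add the account directly on each session host (the domain and account name below are illustrative):

```powershell
# Run on each session host to grant the service account local admin rights
Add-LocalGroupMember -Group 'Administrators' -Member 'CORP\svc-vdi-capacity'
```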

Get and set up the capacity script to execute as a scheduled task on the RDS broker.

  1. Download the capacity script from here.

2. Populate the variables at the top of the script as per your requirements, using the values you collected in the procedures above.

# RDS Variables
# This is one of the brokers in the farm
$Broker = ""
# The name of the virtual desktop collection
$CollectionName = "Virtual Desktop"
# Is it a Highly Available Deployment
$HA = $false

# Session Limits
# The maximum number of sessions before a new host is added (If in doubt, make this half of your max capacity)
$MaxSessions = 8
# The minimum number of sessions before host draining and power off occurs (If in doubt, make this half of your scaling capacity)
$MinSession = 4
# The minimum amount of hosts that should be online (If yours is a HA deployment set this to 2)
$MinHosts = 1
# Maintenance window for reducing session hosts
# Deprovisioning will only be attempted after this time
$StartTime = "13:00:00"
# Deprovisioning will not occur after this time
$EndTime = "05:00:00"

# Azure application / service principal details
# The Azure AD Application Id
$AppId = "f69df505-8d0a-4d28-b2c8-9d76253d9f8e"
# The Azure AD Tenant Id
$TenantId = "f798b6f5-00d9-4480-822a-4c8e5cd7d890"
# The Azure AD Application Certificate Thumbprint
$Thumbprint = "e3b5953fb64f3815fecaa5a502495ce986f3f990"
# Logging and tracking
# Script Log File Path
$LogFile = "C:\Maintenance\capacity.log"
# Script Database File | Note: This makes the script stateful!
$DatabaseFile = "C:\Maintenance\capacity.json"
# Store the environment type as either: Stack | Public
$Environment = "Stack"
# If Stack then setup management endpoint
$ManagementEndpoint = "https://management.local.azurestack.external"
# Virtual machine resource group
$VMResourceGroup = "lab-vdi"
# Subscription Id
$SubscriptionId = "a5bed567-3f86-4a59-ad80-d13963bb79d1"
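These variables drive the script's sign-in. A minimal sketch of how the certificate-based service principal authentication can be performed with them (the Azure Stack environment name is illustrative):

```powershell
if ($Environment -eq 'Stack') {
    # Register the Azure Stack management endpoint once per machine
    Add-AzEnvironment -Name 'AzureStackUser' -ARMEndpoint $ManagementEndpoint
    $EnvName = 'AzureStackUser'
} else {
    $EnvName = 'AzureCloud'
}

# Certificate-based service principal sign-in (no stored secrets)
Connect-AzAccount -ServicePrincipal `
                  -ApplicationId $AppId `
                  -TenantId $TenantId `
                  -CertificateThumbprint $Thumbprint `
                  -Environment $EnvName `
                  -Subscription $SubscriptionId
```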

3. Create the scheduled task on the RDS broker which has the correct Azure / Azure Stack PowerShell modules installed.
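A sketch of registering the task, assuming the script lives alongside its log at `C:\Maintenance` and runs every five minutes as the AD:DS service account (the path, interval and account name are illustrative):

```powershell
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' `
          -Argument '-NoProfile -ExecutionPolicy AllSigned -File C:\Maintenance\capacity.ps1'

# Repeat every five minutes, starting now
$Trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Minutes 5)

Register-ScheduledTask -TaskName 'VDI Capacity' `
                       -Action $Action `
                       -Trigger $Trigger `
                       -User 'CORP\svc-vdi-capacity' `
                       -Password '<service-account-password>' `
                       -RunLevel Highest
```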


Outside the maintenance window the script will only look to raise capacity as required.

During the maintenance window the script will also look to reduce any unused capacity to save costs.

Everything is written to the log file, which automatically rolls over when it reaches 1 MB.

As always, I hope you enjoyed the article and it was of some assistance.