Azure, Enterprise Agreements and MSDN Subscriptions

Just a quick note about something I must have missed the press release on in the Azure and MSDN space:

In the past, if you had an MSDN subscription and were doing development or testing work in an Azure subscription paid for out of an Enterprise Agreement, you ended up paying for licenses twice.

This is down to the fact that Microsoft factors the cost of licenses for their software, such as Windows Server and SQL Server, into their IaaS VM prices. Compare a Linux VM (which has no license costs) with a Windows Server VM of the same size and the difference is significant: at current pay-as-you-go rates a Basic A1 Linux VM costs £20 per month to run, compared to £33.64 for Windows Server.

I don’t know when it changed, but I noticed a new “MSDN Dev/Test Offer Details” link in our Enterprise Agreement portal in late April and started investigating. I ended up digging up this page which details the offer.

In addition, this change allows the deployment of Windows client operating systems to these subscriptions. Something which was previously only possible under an individual’s MSDN subscription.

All in all, a good change and one I’m surprised I missed. I hope by blogging about it I can bring it to the attention of others who may also benefit!

Passed “Implementing Microsoft Azure Infrastructure Solutions” (70-533)

In September of last year Microsoft released a couple of new exams – 70-532: Developing Microsoft Azure Solutions and 70-533: Implementing Microsoft Azure Infrastructure Solutions.

Of the two, 70-533 sounded like a good fit for my existing skills and experience. The high level technical areas it covers are as follows:

  • Implement websites
  • Implement virtual machines
  • Implement cloud services
  • Implement storage
  • Implement an Azure Active Directory
  • Implement virtual networks

I’ve worked extensively with virtual machines, virtual networking, and storage in Azure and recently I’ve been working with Azure Active Directory as well. My exposure to the Azure websites and cloud services features isn’t as extensive but I took this as a good opportunity to explore and learn more about both.

Blogger Anders Eider has put together a great post breaking down the high level areas the exam covers and linking out to relevant Microsoft documentation. There was also an event on Microsoft’s Channel 9 site called “Azure IaaS for IT Pros” tailored specifically for people studying for the 70-533 exam. Finally, the Microsoft Virtual Academy has a section dedicated to Azure training resources which also includes some useful resources.

I must say that reading and watching videos only gets you so far – nothing compares to getting your hands dirty and actually doing this stuff. Luckily, as I say, I’ve recently been working with many of the elements this exam covers as part of my day job. The areas I haven’t been working with, being cloud based in nature, are relatively simple to implement and experiment with. Unlike a subject such as Hyper-V or SCVMM, which would need plenty of physical infrastructure to build an environment to experiment and learn in, there’s no excuse not to roll your sleeves up and play with this stuff. You can even get a free trial account with £125 worth of credit to help your learning.

Having studied hard over the Christmas break I booked myself in for the exam (unfortunately in the UK we don’t yet have the “Home Proctored” exam option, but there’s a testing center here in Bath I can use). This morning I passed with a score of 900/1000!

It’s a shame there isn’t an associated Azure certification that this counts towards (maybe there will be one coming down the line?), but I think this is still a useful exam which helps demonstrate skill and knowledge with what are indubitably new and rapidly evolving tools and technologies.

Migrating a VMWare SharePoint Dev VM to Azure – Part 2

Update: I’ve added disk performance figures for an Azure ‘D’ series virtual machine.

In a previous post I described a process for migrating an on-premises, VMWare based SharePoint development environment up to Azure. Whilst functional in a pinch, performance of the resulting virtual machine left a little to be desired.

The bottleneck seemed to be disk performance. Unsurprising, given that the original VM was designed and optimised to run on a single SSD capable of a large number of IOPS, rather than the single, relatively slow disk offered by an A series VM in Azure.

First of all, I thought I’d gather some metrics on the performance of the respective disks I was using. Microsoft offers a great tool called DiskSpd for doing exactly this.
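If you haven’t used DiskSpd before, an invocation along these lines runs a short random I/O test against a target file and reports IOPS, throughput and latency (the parameters and path here are illustrative rather than the exact ones I used):

```powershell
# 60 second random I/O test: 8KB blocks, 4 threads, 32 outstanding I/Os per thread,
# 25% writes, with latency statistics (-L), against a 1GB test file created on the
# disk under test (path is illustrative)
.\diskspd.exe -c1G -b8K -d60 -t4 -o32 -r -w25 -L E:\disktest\testfile.dat
```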

Here are the salient results after running DiskSpd on some representative disks:

Storage                  | I/O per sec | MB/s     | Avg. Latency (ms)
Local SSD                | 28,813.07   | 225.10   | 2.22
Local HDD (7,200 RPM)    | 514.08      | 4.02     | 124.87
Azure A series OS disk   | 515.71      | 4.03     | 123.36
Azure A series temp disk | 67,390.80   | 526.49   | 0.95
Azure A series data disk | 492.16      | 3.84     | 82.76
Azure D series OS disk   | 972.78      | 7.60     | 65.56
Azure D series temp disk | 128,866.03  | 1,006.77 | 0.495

Little wonder the VM wasn’t performing brilliantly: we’ve got an incredibly heavy workload (remember, this dev environment is running Active Directory Domain Services, DNS, SQL Server, SharePoint features including the Distributed Cache, Search and workflow, as well as client applications like Office, a browser with dozens of tabs open and Visual Studio – just to name a few), all running off a single disk which is only capable of approximately 500 IOPS. The SSD it runs off when hosted locally can handle this load as it offers a blistering 28,000+ IOPS, but what can we do in Azure to improve the situation?

Use the temp disk

Looking at the DiskSpd testing results above, it’d be tempting to try to use the temp disk for as much of the load as possible – it offers an incredible 67,000+ IOPS after all! (As an aside, I can only imagine this is either a RAM disk or something with crazy fast caching enabled; either way, it’s cool, huh?)

Unfortunately, Microsoft go to great lengths to point out that the temp disk assigned to Azure VMs isn’t persistent and shouldn’t be relied upon for storing data long term (more here):

Don’t store data on the temporary disk. This disk provides temporary storage for applications and processes and is used to store data that you don’t need to keep, such as page or swap files.

This rules the temp disk out for much of what we’re doing; however, there’s a clever technique which enables us to configure SQL Server to keep its TempDB data and log files on the Azure temp disk.

This would certainly have a positive impact on performance (both the TechNet documentation on Best practices for SQL Server in a SharePoint Server farm and Storage and SQL Server capacity planning and configuration (SharePoint Server 2013) state that TempDB should be given priority for the fastest available storage), and the nature of TempDB – it’s re-created every time SQL Server starts up – lends itself well to this solution.
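As a rough sketch of the technique (assuming the default TempDB logical file names), the move itself is just a couple of ALTER DATABASE statements; the only wrinkle is that the target folder has to exist on the temp disk before the SQL Server service starts, so it needs recreating at boot in case the temp disk is reset:

```powershell
# Sketch: relocate TempDB to the Azure temp disk (D:). The path is illustrative and the
# logical file names assume the SQL Server defaults (tempdev / templog).
New-Item -Path "D:\TempDB" -ItemType Directory -Force | Out-Null

Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');
"@

# The change takes effect the next time the SQL Server service restarts. A scheduled task
# run at start-up (or similar) should recreate D:\TempDB before the service comes up.
```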

Other than storing TempDB, there isn’t anything else I could think of which was transient and would suit storing on the Azure temporary drive.

Add more disks

As the testing with DiskSpd shows, each data disk we attach to our Azure virtual machine provides ~500 IOPS. Different sized Azure VMs allow you to attach different numbers of data disks. We’re using a relatively large virtual machine (an A6) so can attach up to 8 additional disks. How we use these disks is where things get interesting:

  • We could attach disks, allocate each a drive letter, then move SQL databases and log files onto their own drives. For example: one drive for content database log files, one for search databases, one for content databases and one for all the other databases. This would give each drive its own 500 IOPS.
  • We could attach multiple disks, then use Storage Spaces to pool them all, creating a single volume made up of multiple disks. This one disk would have a higher number of IOPS, although it isn’t a linear 500*number-of-disks relationship. I tested 1, 2, 3 and 4 disks pooled and found IOPS went from 492, to 963, to 1434, to 1919.
  • Finally, we could use a hybrid of the two approaches: pool several disks for the most performance-sensitive files, pool just a couple of disks for less IOPS-hungry data, and use single disks for data that isn’t latency sensitive.

Note: it’s important to remember that where Storage Spaces is often used to provide resiliency through RAID-like features such as mirroring and parity, here we’re only interested in the performance improvement it gives us – resiliency is provided by the underlying Azure infrastructure (writes to VHDs are replicated synchronously to 3 locations in the current datacenter and, potentially, to a further 3 locations in a paired datacenter asynchronously).
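For reference, pooling and striping the attached data disks can be done with the in-box Storage Spaces cmdlets. A minimal sketch (the pool, disk and volume names are illustrative):

```powershell
# Pool every raw data disk attached to the VM
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SQLPool" `
                -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
                -PhysicalDisks $disks

# Create a simple (striped, no resiliency) virtual disk across all the pooled disks
New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLDisk" `
                -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize

# Initialise, partition and format the new disk
Get-VirtualDisk -FriendlyName "SQLDisk" |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -AllocationUnitSize 65536 -Confirm:$false
```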

Use an Azure virtual machine with faster storage

Instead of an ‘A’ series virtual machine, we could look at using the already available ‘D’ series. The primary difference these offer is an SSD-based temp disk. From the performance figures the ‘D’ series returns, we see a slight increase in OS disk performance and frankly staggering performance from the temp disk. Although they’re considerably more expensive, the faster storage included with a ‘D’ series VM would clearly help.

Incidentally, Microsoft recently announced a new tier of virtual machines called the ‘G’ series. These are primarily billed as providing larger local SSD storage options. For our purposes, (where we can only really take advantage of the temp disk for our SQL Server’s TempDB), I don’t think this will help us much.

When they announced the ‘G’ series VMs, Microsoft also announced a new, SSD based, ‘Premium Storage’ offering. Details are trickling out but it sounds like we will be able to employ the techniques already described (attaching additional data disks – this time SSDs – and striping across them using Storage Spaces) to provide very fast storage for our SQL requirements.

Consider another cloud provider

The final approach to consider, if we want to provision a virtual machine with potentially faster storage, is an entirely different public cloud provider. Among the options is Amazon’s AWS offering. I haven’t used AWS a huge amount but their i2.xlarge VM instance offers 4 CPUs, 30GB of RAM and a single 800GB SSD, which sounds like a great fit for what we need.

SQL legend Brent Ozar recently performed some testing of AWS VM performance and compared and contrasted it with Azure. His findings, for the specific scenario he considers, show AWS to be cheaper and better performing.


We can either increase the IOPS available to our virtual machine for little additional cost by adding disks, striping across them and reconfiguring SQL Server to spread the load across them, or we can look to take advantage of higher-performance public cloud storage – though this may come at increased cost.

For my own scenario, I can see that having a SharePoint development environment which works reasonably well and can be spun up quickly will enable developers to start work right away whilst appropriate hardware is ordered and provisioned for them to use on-premises in the longer term.

Migrating a VMWare SharePoint Dev VM to Azure

My normal SharePoint 2013 development and testing environment is a single VMWare virtual machine. It runs Windows Server 2012 R2 and has installations of SQL Server 2012, SharePoint 2013 and Visual Studio 2013. To accommodate all of this, I have a big beefy desktop machine with 32GB of RAM and a couple of solid state hard drives: one for the host operating system and applications and another one to host the hard drive of the virtual machine.

A pretty typical setup, I think?

If a developer starts work on a SharePoint project they may not have a machine capable of hosting that environment right away. Often it needs to be ordered in for them, which may take days or weeks. Likewise, they may only need access to a development environment temporarily.

In order to provide swifter access to a working environment, I thought it might be interesting to explore converting my existing local SharePoint development VM to run in Azure.

Here are the steps I took to achieve this and what I learnt from doing it.

Stage 1: Create the VHD

VMWare virtual machines use a disk format (VMDK) which isn’t compatible with Azure. The first step is therefore to convert the disk to the compatible VHD format.

  1. Inside my running VM I download the Microsoft Disk2VHD tool.
  2. Next, I run Disk2VHD and un-tick the “Use VHDX” option in order to make sure we create a VHD disk rather than the newer VHDX format.
  3. In my scenario I want to create the VHD on a disk on the machine hosting the VM. I can do this by specifying the host PC’s IP in the “VHD File name” dialog box then specifying the admin share of the drive in question (f$ below).
  4. The creation of the VHD will take some time so this is an ideal opportunity to grab a nice cup of tea and a biscuit.

(Optional Stage 1.5: Test VHD in Hyper-V)

Before uploading your newly created 100GB+ VHD to Azure and hoping it’ll work, you can try testing it locally. You’ll need a Hyper-V server with enough resources to host it but it should just be a case of creating a new VM with your VHD as the disk.

Note: I haven’t got a Hyper-V host to hand with enough memory so I’m skipping straight to popping my disk into Azure.

Stage 2: Upload the VHD to Azure

Once stage 1 is complete, we should have our VHD ready to upload to Azure.

To proceed you’ll need to ensure you have access to an Azure subscription, a storage account within that subscription and the Azure PowerShell module installed and configured on your local machine.

Once that’s all ready it’s really just a case of running the “Add-AzureVhd” PowerShell command. We have to specify the “LocalFilePath” parameter, pointing it at our newly created VHD file, as well as the “Destination” parameter.

The destination is the location we want to copy the VHD to. This is made up of the URL for our storage account, the folder within the storage account that the VHD will be stored in and the name we’re going to give the VHD file in Azure. Mine will look a bit like this:
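As an illustration (the storage account, container and file names here are made up):

```powershell
# Sketch: upload the converted VHD to the vhds container of a storage account
$destination = "https://mystorageaccount.blob.core.windows.net/vhds/SP2013-Dev.vhd"
Add-AzureVhd -LocalFilePath "F:\SP2013-Dev.vhd" -Destination $destination
```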

What’s nice about this command is that it does sensible things like create an MD5 hash of your local file then check the file you’ve uploaded has the same hash upon completion. It also supports resuming of your upload should it be interrupted.

When the upload completes you should be able to browse to your Azure storage account, drill down to the vhds container and see your uploaded VHD.

Stage 3: Create a new VM image from the uploaded VHD

Back in the Azure management portal we need to tell Azure this isn’t just any old VHD, this is something it can use as a template to create new VMs from. We do this by creating an “Image” from the VHD.

  1. Navigate to the “Virtual machines” section, click the “Images” heading then choose the “Create an image” option.
  2. On the “Create an image” screen give your image a sensible name and description then select the VHD you uploaded earlier. You also need to specify that the VM is running Windows and tick the “I have run sysprep” box (even though we haven’t!)
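Alternatively, the same thing can be done from PowerShell with the Add-AzureVMImage commandlet. A sketch (the image name, label and blob URL are illustrative):

```powershell
# Sketch: register the uploaded VHD as an image without going through the portal
Add-AzureVMImage -ImageName "SP2013-Dev-Image" `
                 -Label "SharePoint 2013 dev VM migrated from VMWare" `
                 -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/SP2013-Dev.vhd" `
                 -OS Windows
```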

Stage 4: Create a VM from the image

Having created an image from the uploaded VHD, when we now go to create a VM we can drill down into the “My Images” section of the gallery and find our newly created image.

When we create a VM from the image it will take a long time to provision and start up. If you think about it, we’ve effectively pulled the rug out from under our VM and moved it from VMWare’s virtual hardware to Hyper-V’s. It has to rediscover that hardware and set up drivers to be able to talk to it.

This is another great opportunity for a cup of tea and a biscuit.

Once the VM has completed provisioning you should be able to connect to it over RDP and start using it.

Stage 5: Troubleshooting

VM is unresponsive to Remote Desktop

I’ve had mixed results with this approach. On more than one occasion the VM provisioned and started up fine but I couldn’t connect to it over RDP. I was able to ping it and even connect via PowerShell so I knew it was running; it just wouldn’t allow me to remote into it.

I followed some instructions on re-enabling remote desktop remotely, restarted the terminal services service and eventually it started working but I’m not entirely comfortable not knowing why it stopped in the first place.
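If it happens to you, something along these lines run from a remote PowerShell session should bring RDP back (the VM name is illustrative, and the firewall rule group name assumes an English-language install):

```powershell
# Sketch: re-enable Remote Desktop on an otherwise unresponsive VM via PowerShell remoting
Invoke-Command -ComputerName "sp-dev-azure" -ScriptBlock {
    # Allow inbound RDP connections
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
                     -Name fDenyTSConnections -Value 0
    # Make sure the built-in firewall rules for Remote Desktop are enabled
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
    # Restart the Remote Desktop Services service
    Restart-Service TermService -Force
}
```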

Errors in SharePoint

Once connected to my VM the first thing I did was fire up SharePoint Central Admin. It took a while but eventually I was presented with nasty looking IIS / .NET error pages. A little digging revealed that the network card installed when the VM was created in Azure is configured to use DHCP to get both its IP address and DNS.

As my standalone VM is configured to act as its own DNS server, I simply had to change this so the network card points back at the VM itself for DNS and things were happy again.
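Something like this sorts it out (the interface alias is illustrative – check yours with Get-NetAdapter):

```powershell
# Sketch: point the Azure-provisioned NIC back at the VM itself for DNS resolution
Set-DnsClientServerAddress -InterfaceAlias "Ethernet 2" -ServerAddresses 127.0.0.1
```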

Stage 6: Using the VM

Whilst it just about works, using the VM on an on-going basis would be pretty painful. It’s hampered by the low disk performance Azure provides. I grabbed a couple of screenshots from the Resource Monitor tool to illustrate this. Of particular interest is the “Response Time” column I’ve highlighted. This shows how long each process is having to wait to get a response from the underlying storage infrastructure.

On the local copy of the VM, hosted on a dedicated SSD, response times are generally less than 5 ms:

Local VM hosted on a dedicated SSD.

By way of contrast, the same VM hosted in Azure shows response times in the thousands of milliseconds:


VM hosted in Azure on a single disk.


Clearly, this particular VM was originally architected to take advantage of a single, fast SSD. Doing a ‘lift and shift’ and just dumping it into Azure was never going to provide an optimal solution. Having said that, the fact that it works at all and is useful in a pinch is pretty impressive.

The key takeaway from this should be that demanding workloads such as SharePoint can be implemented successfully on the Azure platform, but there are a number of very specific considerations which should be taken into account when designing and implementing the underlying infrastructure. Expecting existing VMs to be migrated across and work just as well as they did on-premises is naïve.

I’ll be sure to create a follow up to this post with some lessons I’ve learnt from implementing SharePoint and other demanding workloads in Azure.

Sorting the SharePoint Quick Launch Menu in PowerShell

I recently had to work with a SharePoint site which had accumulated an enormous number of links in its left-hand, (or ‘quick launch’ in SharePoint parlance), menu. Unfortunately, I wasn’t able to activate the ‘SharePoint Publishing Infrastructure’ feature for the site in question, meaning I couldn’t use the automatic sorting functionality. Instead, all I had available in the interface was the clunky list of items with a pull-down menu to specify the position they should appear in.


Unsorted quick launch and rudimentary sorting interface.

A quick search turned up lots of examples of how to add and remove individual links or how to export / import via an XML file, but sorting seems not to come up. (Maybe everyone else is better at managing what ends up in their navigation elements?) Regardless, armed with my trusty PowerShell ISE I reckoned I could whip something up to fulfil my requirement.

My plan was simple:

  1. Get the specified site’s quick launch.
  2. Create a sorted copy of the links in the quick launch.
  3. Clear the current quick launch.
  4. Copy back in my sorted list of links.
  5. Treat myself to a nice cup of tea and a biscuit.

Here’s the quick and dirty solution I came up with:
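In outline it looks something like this – a minimal sketch rather than the finished script, with an illustrative site URL, and sorting only the links under the first quick launch heading:

```powershell
# Sketch: sort the links under the first quick launch heading alphabetically by title
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://sharepoint/sites/teamsite"   # illustrative URL
$section = $web.Navigation.QuickLaunch | Select-Object -First 1

# 1 & 2. Take a sorted snapshot of the existing links
$sorted = $section.Children | Sort-Object Title | ForEach-Object {
    [PSCustomObject]@{ Title = $_.Title; Url = $_.Url }
}

# 3. Clear the current links (iterate over a copy of the collection while deleting)
@($section.Children) | ForEach-Object { $_.Delete() }

# 4. Add the links back in sorted order
foreach ($link in $sorted) {
    $node = New-Object Microsoft.SharePoint.Navigation.SPNavigationNode($link.Title, $link.Url)
    $section.Children.AddAsLast($node) | Out-Null
}

$web.Dispose()
```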

That fixed my immediate problem but when I get five minutes I’ll come back and make a function out of it. I’d like to be able to specify the site and the specific section I want sorted, perhaps with the ability to sort the top-level sections too.

First Steps With PowerShell DSC and Azure IaaS

Earlier this month, Microsoft announced new functionality in the PowerShell SDK which allows us to use Desired State Configuration (DSC) in combination with Microsoft’s Azure IaaS offering.

The introductory examples of DSC I’ve seen so far tend to concentrate on the WindowsFeature and File resource types. Whilst these are no doubt useful (and quite spiffy), I couldn’t see how to use them to achieve some of the more complex operations I’d like to undertake when provisioning VMs in Azure.

One of the tasks I always perform on freshly minted Azure VMs is to change the power plan in Windows from the default of ‘Balanced’ to ‘High Performance’. This is a simple step you can take to improve performance but it’s a hassle to do it manually every time you spin up a VM.

Step up DSC!

Writing the Script

First up I created a script which defines the configuration I want to enact on my new VM:
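A minimal sketch of the sort of thing I mean is below (the configuration and resource names are illustrative; the GUID in SetScript is the well-known built-in ‘High performance’ plan GUID):

```powershell
Configuration SetPowerPlan
{
    Node "localhost"
    {
        Script HighPerformancePowerPlan
        {
            # Returns $true if the active plan is already High performance
            TestScript = {
                (powercfg /getactivescheme) -match "High performance"
            }
            # Activates the built-in High performance plan
            SetScript = {
                powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
            }
            # GetScript must return a hash table
            GetScript = {
                @{ Result = [string](powercfg /getactivescheme) }
            }
        }
    }
}
```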

I’m still new to DSC but I think of ‘Configuration’ as a function, and the ‘Node’ construct as a way to scope which hosts the configuration is going to run on.

Once the function and the scope are defined we get down to business. Normally, this is where the examples start adding IIS or copying files around. What I wanted to do was run a command. Specifically, the command to set the power plan to high performance.

The script resource in DSC allows us to do exactly this. It’s broken into three sections:

  • TestScript – a block of code which, when run, checks whether the power plan is set to high performance or not. If it returns $false, the SetScript block is run.
  • SetScript – a block of code which does the heavy lifting. In this instance, setting the power plan to high performance.
  • GetScript – another block of code which, apparently, must return a hash table. (I haven’t really got my head around the GetScript section – more reading required on my part…)

Compiling the Configuration

Having written our script we need to compile it into a MOF. I had a bit of trouble with this but eventually figured out I needed to dot source my script before calling the configuration I had defined within it (‘SetPowerPlan’ above).
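In practice that just means something like:

```powershell
# Dot source the script so the SetPowerPlan configuration is defined in the session,
# then call it to compile the MOF
. .\SetPowerPlan.ps1
SetPowerPlan
```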

This creates a folder in the current working directory named SetPowerPlan which contains a file called localhost.mof.

Publishing the DSC

Now we’ve written and compiled our configuration it has to be published so it’s accessible to our freshly minted VMs. The new ‘Publish-AzureVMDscConfiguration’ commandlet enables this.

It bundles our compiled code into a .zip file and uploads it to an Azure storage container – where better to provide a central location for our nascent VMs to read their configurations from?
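In its simplest form the call is just the following, assuming the subscription’s current storage account and the default container:

```powershell
# Sketch: package SetPowerPlan.ps1 into SetPowerPlan.ps1.zip and upload it to the
# default DSC container of the subscription's current storage account
Publish-AzureVMDscConfiguration -ConfigurationPath .\SetPowerPlan.ps1
```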

Including the DSC When Provisioning a VM

Now our script is written, compiled and in a location new VMs can access it, we just need to make sure it’s run when we provision a new VM. We can do this by using the ‘Set-AzureVMDSCExtension’ commandlet to inject the DSC script into the VM configuration object.

Then, when the new VM boots, the Azure VM agent will install the PowerShell DSC extension, which in turn will download the ZIP package we published previously, execute the “SetPowerPlan” configuration we included as part of SetPowerPlan.ps1, and then invoke PowerShell DSC by calling the Start-DscConfiguration cmdlet.

Here’s an example of creating a new VM which includes our DSC script:
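(The cloud service, image, size and credentials below are all illustrative.)

```powershell
# Sketch: build the VM configuration, inject the DSC extension, then create the VM
$vm = New-AzureVMConfig -Name "dsc-demo" -InstanceSize Small -ImageName $imageName |
      Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password $adminPassword |
      Set-AzureVMDscExtension -ConfigurationArchive "SetPowerPlan.ps1.zip" -ConfigurationName "SetPowerPlan"

New-AzureVM -ServiceName "dsc-demo-svc" -Location "West Europe" -VMs $vm
```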


So there we go – still a fairly simple example, but my mind is already racing trying to come up with other uses for this. Another obvious one I’m thinking of is setting the correct locale and time zone in new VMs but really, the possibilities are endless.

SharePoint Online Provisioning Woes – Resolved!

Update: 6 hours after purchasing the subscription, (and writing this post), things started working. The licenses are now available and can be assigned to my users just fine. Normal behaviour or just bad luck? I’m sure I’ll be doing more work with SharePoint Online and Office 365 in the future so it’ll be interesting to see.

This is something fairly new to me but I was recently helping a customer with a SharePoint Online proof-of-concept when I encountered an odd issue around assigning SharePoint Online licenses to users.

After successfully creating the tenant I went ahead and purchased an initial 2 licenses for SharePoint Online (Plan 1). These show up in the Office 365 admin center under the ‘Billing’ menu:

Office 365 subscription added successfully.

Selecting the ‘Licenses’ menu, however, tells me I don’t currently have any licenses. Helpfully, it points out how I can purchase a subscription though:

Office 365 licenses missing.

As you might expect, attempting to assign licenses to users fails.

I’ve submitted a service request so it’ll be interesting to see how the issue is dealt with.

SharePoint Newsfeed app for iOS Updated

Microsoft have made it abundantly clear that the public cloud based Yammer application is their recommended social tool for use with SharePoint and their other cloud based services. They’ve also made it clear that the social features present in SharePoint 2013 won’t be developed further.

Imagine my surprise then, when I saw that the SharePoint Newsfeed app I have installed on my iPad received an update earlier this month. For anyone who hasn’t used it, this app connects to a user’s SharePoint Newsfeed and presents its content in a fairly compelling format.

The update note states, “This release is focused on improving stability and fixing bugs.”

No new features it seems but it’s nice to see it’s being given some love.

SharePoint Newsfeed App for iOS update note.

Implementing Azure Internal Load Balancing

One of the recently announced new features of Azure is Internal Load Balancing (ILB) support. For a long time it’s been possible to define external endpoints for a virtual machine running in Azure. If you have multiple virtual machines you can configure the same endpoint on each and allow Azure to load balance traffic across them.

By distributing traffic across multiple hosts you’re able to service a greater number of incoming requests.

Load balancing can also help with high-availability: if a host in your load balanced set becomes unavailable (as judged by the configured probe behaviour) the Azure load balancer will stop sending traffic to it.

While this is useful for internet facing scenarios, until this week, if you were using the virtual private network functionality Azure offers, you couldn’t make use of it. As a result you’d end up having to implement Windows NLB or similar solutions – additional configuration and complexity that frankly, you probably don’t want. With the introduction of Azure ILB however, we can delegate the load balancing to Microsoft’s infrastructure instead.

The feature is only in preview for now so there are one or two wrinkles around configuration and it needs planning up front.


Prior to beginning I’m going to assume you’ve already got an Azure account and a subscription set up. You’ll also need to have set up a site-to-site VPN. There are plenty of resources out there which you can use to help get you to this point.

Create Regional Virtual Network

As noted in Scott Guthrie’s blog post announcing the new feature, “ILB is available only with new deployments and new virtual networks”. What isn’t obvious however, is that in order to be able to create an ILB you also need to be using another new feature: regional virtual networks.

As noted in the FAQ of that post about regional virtual networks, “The newly announced features [Reserved IP, Internal load balancing and Instance level Public IP] are all managed at a regional level, hence they are only available for deployments going into a Regional Virtual Network.”

The blog post does a good job of explaining how to create a regional virtual network but in summary it’s a case of:

  • Create a virtual network as normal
  • Export the virtual network configuration to XML
  • Delete your virtual network
  • Edit the downloaded XML and change AffinityGroup=”<NameOfYourAffinityGroup>” to Location=”<NameOfYourRegion>”
  • Recreate your virtual network with the updated XML configuration as input

Once you have your regional virtual network created, you can go ahead and create a cloud service and add VMs to it as normal.

Add an Internal Load Balancer Instance

Once you have a cloud service configured you can create an ILB associated with it. For the time being this is only possible through PowerShell but hey, you’re scripting all this configuration anyway so that you can re-do it quickly, easily and reliably in the future, right?

The commandlet you need is Add-AzureInternalLoadBalancer. You’ll need to pass in the name of the cloud service you want to ultimately create the load balanced endpoint for, as well as picking a name for the ILB. You’ll need this later so name it sensibly. Optionally, you can specify an IP and subnet within your virtual network to be used.
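For example (the service name, ILB name, subnet and IP address are all illustrative):

```powershell
# Sketch: create an internal load balancer instance against an existing cloud service
Add-AzureInternalLoadBalancer -ServiceName "my-cloud-service" `
                              -InternalLoadBalancerName "web-ilb" `
                              -SubnetName "Subnet-1" `
                              -StaticVNetIPAddress 10.0.1.10
```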

Add EndPoints to VMs in the Load Balanced Set

Each VM that’ll service requests needs to have an endpoint created and associated with the ILB you just created. Again, this can only be done through PowerShell at this time, but the syntax is similar to creating a normal endpoint with the Add-AzureEndpoint commandlet, except that you use the -InternalLoadBalancerName parameter to specify the ILB you created earlier.

You’ll need to run something like the following command for each machine you want to add to your load balanced set:
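(A sketch – the service, VM, endpoint and load balanced set names, plus the ports, are illustrative.)

```powershell
# Sketch: add a load balanced endpoint bound to the ILB for one VM in the set
Get-AzureVM -ServiceName "my-cloud-service" -Name "web-vm-1" |
    Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                      -LBSetName "web-lbset" -InternalLoadBalancerName "web-ilb" `
                      -ProbePort 80 -ProbeProtocol tcp |
    Update-AzureVM
```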

If you’ve followed the content of this post to this point the above command will fail. This is due to another wrinkle with the way ILB works currently. For some reason you can’t create an endpoint attached to an ILB unless there’s at least one external endpoint defined somewhere in your deployment. I haven’t yet narrowed down the specifics of this external endpoint requirement so for the time being I simply created a dummy external endpoint on a random high port then attached an ACL to that endpoint to block any traffic to it.
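Something along these lines does the trick (again, the names and the port number are illustrative):

```powershell
# Sketch: dummy external endpoint with a deny-all ACL so it never actually accepts traffic
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Deny `
                   -RemoteSubnet "0.0.0.0/0" -Description "Block everything"

Get-AzureVM -ServiceName "my-cloud-service" -Name "web-vm-1" |
    Add-AzureEndpoint -Name "dummy" -Protocol tcp -LocalPort 59999 -PublicPort 59999 -ACL $acl |
    Update-AzureVM
```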

Having done that, I was able to add internal endpoints for each VM with no problems.

All that remained was to confirm the load balancer was working as expected.


A site-to-site VPN was already a compelling way to extend your on-premises infrastructure into Azure IaaS. This new internal load balancer functionality further eases the migration of on-premises workloads to the cloud.

I’m excited to try spinning up a load balanced Office Web Apps farm as well as a SharePoint farm with load balanced web front ends.

I’ll be sure to post my findings here.