I have been involved in a BizTalk project recently, and people often assume, “I am doing BizTalk on-premise, so why do I need Azure?” On the face of it, this project is one that a few years ago probably wouldn’t have used the cloud, but today, with a forward-thinking organisation, we were able to leverage the cloud to really help us.

By the way, if you like this article you may find the Integrate 2016 summit very interesting too. If you come to the summit and would like to know more about this article, please say hi if you see me at Integrate 2016.



First off, let’s take a look at the production architecture so we can see that the cloud isn’t really required there.


In the production architecture, you can see that we have a typical organisation: there is an Active Directory server and an SMTP server. The BizTalk setup is shown in the lower-left box: BizTalk 2013 R2, a SQL Server instance used by BizTalk, and a server running BizTalk360 for monitoring and operations.

The BizTalk environment integrates with the line-of-business applications, the main one being Dynamics CRM, an on-premise installation that is new to support this transformation project. The BizTalk environment also integrates with some B2B business partners via an SFTP server hosted within the company.

The below diagram shows a rough interpretation of this architecture.

For this scenario there were a few limitations that prevented the use of the cloud, mainly data security and compliance constraints around the organisation’s geographic location. While the cloud could not be used for production, it offered a number of big opportunities that could help us deliver this project successfully.


In the development arena, we all know that getting the resources a development team needs to successfully build good solutions is difficult. That said, we were able to use Azure to remove the constraint that on-premise infrastructure places on our development capability. This also meant that we could work effectively as a globally distributed team.

The architecture of the development environment was as follows:

If we think back to the core cloud services we used in development then it was as follows:

  • Visual Studio Team Services
  • Microsoft Azure PaaS
  • Microsoft Azure IaaS
  • Confluence

First off, let’s start with Confluence. Confluence is to wiki products what Superman is to superheroes. No, I don’t mean it has just been killed by a competitor and brought back from the dead by Lex Luthor. As a tool, Confluence is my favourite for helping a group of people work together and collaborate on gathering the requirements for integration projects. There is often a lot of analysis required: you need to capture and record information and be able to articulate the relationships between pieces of information. Unfortunately, I have found nothing in the Microsoft ecosystem which really does this in a way that brings your team together as Confluence does. In this project, I had multiple people around the world working on requirements as if they were in the same room!

When you combine Confluence for the detailed analysis information with Visual Studio Team Services for work item management then this gives you a pretty solid foundation for a good project.

Next up, we also used Visual Studio Team Services. I have already mentioned the work item management, so we had features, product backlog items and Kanban boards, but we also used the source control and automated build features to make sure our code was centrally managed, and we kept the code at good quality and low risk through an automated build process which triggered on check-in of changes.

In terms of the actual development, we used Azure IaaS to create a network of virtual machines. The network contained a domain controller so we had a full Active Directory environment to manage the team effectively. We then used the Azure Marketplace to get the BizTalk 2013 R2 Development edition VM image and created some development machines and attached them to the network. One of the machines had the Visual Studio Build Agent deployed to it so that it became a dedicated build server.

The Azure IaaS environment is managed within an Azure DevTest Lab so that we can add some rules around the lab and also use things like auto start-up and shut-down to reduce costs. We also used Operations Insights (OMS, Log Analytics or whatever its current name is) to monitor some of the VMs to ensure we had a view on the setup and management of the machines.
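A DevTest Lab applies start-up and shut-down schedules for you, but the idea is simple enough to sketch. Here it is in Python; the hours and the saving calculation are hypothetical, not our actual lab policy:

```python
from datetime import time

# Hypothetical lab policy: auto-start at 08:00, auto-shutdown at 19:00.
# (A DevTest Lab enforces rules like this for you; this just shows the idea.)
LAB_START = time(8, 0)
LAB_STOP = time(19, 0)

def vm_should_run(now: time) -> bool:
    """True if a lab VM should be powered on at wall-clock time `now`."""
    return LAB_START <= now < LAB_STOP

# Rough saving: VMs run 11 hours a day instead of 24.
HOURS_SAVED_PER_DAY = 24 - (LAB_STOP.hour - LAB_START.hour)
```

Even this crude schedule roughly halves the compute bill for a lab that is only used during working hours.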

By combining Azure + Visual Studio Team Services + Confluence we ended up with a low-cost development environment that we could easily scale up and down on demand, and when the project goes live we can shut most things down until the next project arrives. The whole setup was up and running within a few days. Compare that to on-premise, where you would spend more in cross-charges for one VM than we spent on the entire lab for the whole project, and where we would probably have waited weeks to get the kit the project needed.

Test Environments

When it came to testing, there were a number of ways that the cloud could help us. Some of the key features we used are:

  • Visual Studio Team Services for bug tracking
  • Azure IaaS for test environments

I’ll also go into a little more detail below.

Non Functional/Performance Testing

In the non-functional testing space, we needed to set up an infrastructure on-premise so it was production-like and we could replicate a like-for-like scenario. You would think that this would limit some of our options to use the cloud, and you would be right.

I guess the main thing we used in non-functional and performance testing was Application Insights, which we used to stream application telemetry to the cloud so we could see how things were working and get some issue diagnosis. We did this for this kind of testing because the performance and usage analytics would be very helpful, but most of the rest of the environment did not leverage the cloud.

Time Travel Testing

In this project, we had some very interesting time travel testing scenarios. There were some key date-based business scenarios to be tested, some partner constraints, and a significant number of tests that took longer than one day to execute. The upshot of this was that we needed a test environment that was date-rolled to a specific date every day during the test window. The next test window would use the same date again for five days of testing. This posed some challenges, but we used the cloud to help us here.
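The heart of the requirement, resetting every machine to the same business date each morning of the window, boils down to computing a clock offset. A minimal sketch in Python, purely for illustration (the target date is hypothetical, and our real implementation used PowerShell):

```python
from datetime import date, timedelta

# Hypothetical fixed business date that every test in the window runs under.
TARGET_DATE = date(2016, 3, 31)

def clock_offset(real_today: date, target: date = TARGET_DATE) -> timedelta:
    """Offset to apply to a machine's clock so it reports `target` today.

    Re-applied each morning of the test window, this rolls every VM back
    to the same business date for all five days of testing.
    """
    return target - real_today
```

Because the offset is recomputed daily, the environment can drift forward overnight during long-running tests and still snap back to the agreed business date each morning.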

We created an entire test environment in Azure using mainly IaaS as shown by the below diagram.

In the diagram you can see it is very similar to the production environment, except that we have replaced the SMTP server with SendGrid. This was mainly to simplify things. We have also included Azure Automation: we used a runbook in Azure Automation to manage the time travel actions using PowerShell across the VMs in the environment.
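Swapping the SMTP server for SendGrid is low-effort because SendGrid accepts standard SMTP connections: clients just point at a different host and authenticate with an API key. A hedged sketch in Python (the addresses and helper names are made up for illustration):

```python
import smtplib
from email.message import EmailMessage

# SendGrid's SMTP endpoint; authentication uses the literal user name
# "apikey" with your SendGrid API key as the password.
SENDGRID_HOST = "smtp.sendgrid.net"
SENDGRID_PORT = 587

def build_notification(sender: str, recipient: str,
                       subject: str, body: str) -> EmailMessage:
    """Build a message exactly as we would for an on-premise SMTP server."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_sendgrid(msg: EmailMessage, api_key: str) -> None:
    """Only the host, port and credentials change versus on-premise SMTP."""
    with smtplib.SMTP(SENDGRID_HOST, SENDGRID_PORT) as smtp:
        smtp.starttls()
        smtp.login("apikey", api_key)
        smtp.send_message(msg)
```

Nothing else in the solution needs to know the mail server moved to the cloud, which is exactly why it simplified the test environment.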

The great thing is that we don’t need to do time travel testing all of the time, we just need to do it now and again so we can turn off pretty much everything in the environment until it is next needed.

Comparing this approach in the cloud with on-premise, the big advantages Azure offered were the ability to create the lab quickly and at low cost without needing to set everything up on-premise. It is a solution-specific approach, and it is easy to work out how much the project has spent to the nearest dollar, rather than relying on vague finger-in-the-air estimates of actual cost. The turn-it-off-until-you-need-it option is massive too.

UAT & Integration Testing

For UAT and other environments, we had pretty much the same setup as for the time travel testing. The one major difference was around Microsoft Dynamics CRM. The online SaaS version and the on-premise/IaaS versions of Dynamics CRM are very close in terms of capability. We decided to use CRM Online to make things easier for the early testing environments. This meant the team could focus on delivering features for the business rather than having to worry about managing servers, which supported a much more agile delivery approach while our CRM solution was going through rapid change. CRM Online is also multi-tenant, so the team could have multiple CRM environments if they wanted.

The key thing here is that when focusing on “do the processes work the way the business wants”, a SaaS approach let us concentrate on business value and not worry as much about infrastructure and technical matters. We would then worry about the infrastructure during NFT testing, where we moved to the server version of CRM. A diagram of what this looks like is below.


I hope that this post has given everyone some food for thought: no matter what project you’re working on, there are nearly always opportunities to take advantage of the cloud to help you deliver a solution within the overall big picture of the project.

This particular project would usually be one of the most cloud-averse, but when the organisation is embracing the cloud, even with some non-cloud-friendly constraints you can still use the cloud where it makes sense, so you can focus your on-premise investments in the places you really need to.

