One of the most common patterns for implementing an API on Azure today is to use Azure Functions as the back end of the API, where the bulk of the work is done, and then to put Azure API Management (APIM) in front of the function as an API proxy. APIM adds security, management and operational features which make your API better and improve its usability without you having to do much work manually.

A typical view of what this might look like is in the below picture.

The main reason this kind of pattern is popular is that it's simple to use and get up and running. There is a strong serverless element to it and the operational costs are not that high. For the average company, or one new to this pattern, you are likely using the standard edition of APIM and the consumption (pay as you go) Functions plan. If you implement this pattern then you are likely to use the APIM authentication features to implement whichever style is appropriate for your users, flow an authentication header to the back-end function if required, and use APIM policy to inject the function key as an HTTP header so that APIM can authenticate against the function.

While this is easy to get up and running there are a few vulnerabilities to be aware of in the solution which you may want to do something about depending upon your attitude to risk. The main ones are:

  • The functions are by default publicly accessible, so it would be possible to bypass APIM and call the functions directly
  • The functions are by default callable over both HTTP and HTTPS, which you can look to lock down
  • Some of the headers returned from APIM and Functions reveal information about the implementation technology, which will usually flag up on a security review/test

To be fair, in my experience of implementing this solution, many of these issues are flagged as low or medium risk by the tools that security testers use. However, a lot can be done with a small amount of work to tick most of these boxes, so it makes sense to mitigate these risks. The challenge is that there are a number of ways to do certain things, depending on your preferred technologies and how much money you want to spend. I thought it would be interesting to discuss some of them.

Option 1 – Simple ways to harden the solution with minimum change

If we prefer to keep the solution pretty simple and use as many of the PaaS and Serverless type features on Azure as possible then we can make the following changes:

  • Turn on HTTPS only on Azure Functions

By default Azure Functions are callable over both HTTP and HTTPS, and this will flag up with your security testing tools. Azure Functions has an option to turn off support for HTTP so that only HTTPS can be used. This is a good way to secure the transport between the APIM proxy and the Function.
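As a sketch, assuming the Azure CLI and hypothetical resource names, the setting can be flipped without visiting the portal:

```shell
# Hypothetical resource group and function app names; assumes you are
# already logged in with the Azure CLI (az login)
az functionapp update \
  --resource-group my-api-rg \
  --name my-api-func \
  --set httpsOnly=true
```

After this change, any plain HTTP request to the function app is redirected to HTTPS.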

  • IP Restriction on Azure Functions

Your APIM instance has a static public IP address which requests forwarded via the proxy will come from. This means you can add an additional hardening step in the Azure Function settings by adding an IP restriction. Once enabled, this is a whitelist of addresses which are allowed to access the function. It is an easy way to lock down the communication between the APIM proxy and the function: to access the function, a caller will (as a minimum) need the function key and will also have to come from the IP address of your APIM proxy.
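A sketch of how this might look with the Azure CLI, using a hypothetical resource group, function app name and a placeholder for your APIM public IP:

```shell
# Hypothetical names; allows only the APIM public IP (placeholder shown)
# to reach the function app, everything else is denied
az functionapp config access-restriction add \
  --resource-group my-api-rg \
  --name my-api-func \
  --rule-name AllowApimOnly \
  --action Allow \
  --ip-address 203.0.113.10/32 \
  --priority 100
```

Replace `203.0.113.10` with the actual static IP shown on your APIM instance's overview blade.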

  • Use APIM policy to remove HTTP headers

The usual security testing tools will identify that the Azure Functions platform injects some headers, much as IIS does. The headers present by default are X-AspNet-Version and X-Powered-By. Security testing tools don't like these headers because they tell an attacker about the technology behind your API, which might help them attack it.

In the Azure Function itself we don't have many out-of-the-box options to add or remove headers, but we are using APIM in front of our function, which is the perfect place to use APIM policy to remove them.
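A minimal outbound policy fragment for this might look like the following (trimmed here to just the outbound section of the policy document):

```xml
<policies>
    <outbound>
        <base />
        <!-- Strip headers that reveal the implementation technology -->
        <set-header name="X-Powered-By" exists-action="delete" />
        <set-header name="X-AspNet-Version" exists-action="delete" />
    </outbound>
</policies>
```

Applied at the API (or product) scope, this removes the headers from every response before it leaves APIM.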

  • Use APIM policy to add HTTP headers

Following guidance from security tools and OWASP, one of the headers recommended if you're using HTTPS is the Strict-Transport-Security header. There is some more info on this header on the following link: This header will help to mitigate man-in-the-middle attacks. To implement this we can easily use APIM policy to inject the header into the responses from our API.

One point to note on this header is that the documentation out there is a little unclear about how it is handled in an API scenario; most guidance focuses on the browser. I'm not 100% sure how back-end .NET code, for example, would handle things compared to a browser accessing the URL. It's not that the .NET code wouldn't work; I think it's more that the browser may use browser features to make sure any links returned are kept on HTTPS. That said, if it's likely your API could be accessed from JavaScript in the browser, or just for future proofing, it's probably a good idea to add this header anyway.
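As a sketch, the outbound policy to inject the header could look like this (the max-age of one year and the includeSubDomains directive are my illustrative choices, not a requirement):

```xml
<policies>
    <outbound>
        <base />
        <!-- Tell clients to only use HTTPS for the next year, including subdomains -->
        <set-header name="Strict-Transport-Security" exists-action="override">
            <value>max-age=31536000; includeSubDomains</value>
        </set-header>
    </outbound>
</policies>
```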

At this point with option 1 I have taken my cloud PaaS implemented API, which was simple, and without adding any additional cost I have made a few modifications to the settings in Azure which make my API less vulnerable. The key points are that I have stopped the Azure Functions being accessible by bypassing APIM, I've restricted traffic to HTTPS only, and I've ticked a few boxes on some of the common lower-risk vulnerabilities.

Option 2 – Locking things down with Networking Level Security

As I mentioned earlier, there are other ways you can look to harden the solution. One is to introduce cloud-based networking to the architecture, and then use the features of cloud networks to lock down aspects of the solution. One of the biggest advantages of this approach is that many of the people you liaise with when reviewing the security of your solution tend to come from an infrastructure background rather than a development background. I have found in practice that the PaaS or serverless aspects of a solution are the areas where you need to spend more time explaining the technology and how it is different. If you add a virtual network to the solution then it introduces an infrastructure element which can be locked down in ways people in your team may be more familiar or comfortable with.

In the below picture you can see how the original solution has been modified to include a virtual network (VNet) in Azure.

The key changes here are that APIM has been upgraded to the premium SKU, which allows it to be connected to a virtual network, and the Azure Functions have been moved from a consumption plan to an App Service Plan scaled to a SKU that also allows it to be connected to a virtual network.

At this point we can use Network Security Groups (NSGs) to lock down access between resources and subnets so that the functions are only visible from within the VNet. This is similar to the IP restriction we added in option 1, but it can be managed at the network level with NSGs in a way that is more familiar.
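As an illustration, assuming hypothetical names and an APIM subnet of 10.0.1.0/24, an NSG rule allowing only the APIM subnet to reach the functions subnet over HTTPS might look like this:

```shell
# Hypothetical names and address ranges; allows HTTPS into the functions
# subnet only from the subnet that APIM is connected to
az network nsg rule create \
  --resource-group my-api-rg \
  --nsg-name functions-nsg \
  --name AllowApimToFunctions \
  --priority 100 \
  --access Allow \
  --direction Inbound \
  --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 443
```

Combined with a lower-priority deny rule, this removes any line of sight to the functions from outside the VNet.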

In addition to this we would still also implement the following changes just like in option 1:

  • Add HTTP headers with APIM policy
  • Remove certain headers with APIM policy

The key difference here is the way which we have removed the public line of sight to bypass the APIM component and access the functions directly.

You could probably argue that option 2 is more secure than option 1 because of the number of additional elements protecting those functions, but it is important to note that there is a significant cost implication too: to connect to the VNet the APIM SKU needs to be premium, whereas in the example above we were using standard. Also, the App Service Plan hosting the functions changes from a consumption model to a node-based model, so you will be paying per hour for the size and number of nodes you use and will need to manage scaling yourself, which is different to the consumption model where Microsoft takes care of this for us.

Option 3 – Locking Down Cipher Suites

One of the common tests you will do when looking at the security of your API or website is to test the SSL setup with a dedicated SSL testing tool. In this solution architecture you are likely to test both the APIM and Function endpoints with such a tool (before you lock down visibility to the functions). When looking at the SSL setup, some interesting background info is on the below links:
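If you want to run a quick check yourself rather than use a hosted scanner, one common approach (my suggestion, not from the original testing setup) is nmap's cipher enumeration script against a hypothetical endpoint:

```shell
# Enumerate the cipher suites and protocol versions a public HTTPS
# endpoint offers (requires nmap to be installed; hostname is a placeholder)
nmap --script ssl-enum-ciphers -p 443 my-api.azure-api.net
```

The script grades each cipher suite, which makes it easy to spot the weaker ones at the bottom of the preference list.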

One of the things that might flag up for consideration is the cipher suites supported for SSL. When we ran the tool above, both our functions and APIM got A ratings, but you can see in the detail of the report (or other reports run from the OWASP tools) that Azure offers support for some weaker cipher suites at the bottom end of the preference list. I would expect this is to allow the platform to support applications built for backwards compatibility with software or browsers which are not the most up to date. In many ways this makes a lot of sense: Microsoft's position is to support things recognised to be good and not support things which are obviously bad, but that leaves a small grey area where some of the more stringent recommendations would prefer you to modify the supported ciphers if this was your own server. From Microsoft's position, though, they want a platform for all, so they may need these ciphers for backwards compatibility or other reasons.

At this point you have a couple of choices. The first and easiest is to do nothing. You may decide that the flagged weak ciphers are completely acceptable for your solution and choose to let Microsoft look after this for you. At some future point Microsoft will probably update the supported ciphers, and in true platform-as-a-service style you will inherit this update.

If, however, you were particularly concerned about the supported ciphers and wanted to modify the list to harden your solution even more, then you can bring the Azure Application Gateway into your solution. This feature of Azure offers something called the Web Application Firewall, which introduces a Layer 7 load balancer to the solution, and in the gateway you have the option to modify cipher policies as described in the article below.
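As a sketch, assuming hypothetical resource names, applying one of the predefined stricter SSL policies to an existing gateway can be done from the Azure CLI:

```shell
# Hypothetical names; apply one of Azure's predefined stricter SSL
# policies to the Application Gateway (drops older TLS versions/ciphers)
az network application-gateway ssl-policy set \
  --resource-group my-api-rg \
  --gateway-name my-api-appgw \
  --policy-type Predefined \
  --policy-name AppGwSslPolicy20170401S
```

There is also a Custom policy type if you want to hand-pick the cipher suites and minimum protocol version yourself.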

This option is probably something a company would do if they had already spent the money locking down with a VNet like in Option 2. They are probably trying to mitigate all of the risks they have identified in security testing. This would mean their solution would now look like the one below.

In this solution, the Application Gateway sits in front of the APIM component, which means it will slightly modify the SSL setup as seen by the consumer. With everything in a VNet, NSGs now control how things are accessed. I'd view this solution as being aimed at a company that is happy to spend a significant amount of additional money on security testing their cloud application and hardening it as much as possible. They have significantly increased the complexity of the solution too, which they would have to accept as a side effect, and there would need to be a good investment in training their support people.

Now one thing to note, it is also possible that the solution could be implemented using the Application Gateway to restrict the cipher policy while the rest of the components are not in the VNet. The Application Gateway has to live in the VNet but it can point to resources outside of a VNet. In this case you would end up with a solution like the below:

While this is a technically feasible architecture, I think in practice the type of company who would choose the option 1 style solution, maximising the serverless and PaaS elements, would be more likely to let Microsoft manage cipher suites than to take on the extra complexity themselves. I can't see many companies choosing this approach.


I think it's great that out of the box with Azure we can deliver low-cost solutions which are pretty solid and secure. Even better is the fact that there are many features which can be used to harden your solution further, many of which are free but some of which can cost a lot. The question for many companies is where to position themselves on the spectrum of how much time and money to invest in hardening the solution, balanced against making it more complicated. Obviously everyone wants the most secure solution possible, but also the cheapest solution that does the job well.

I hope that talking through some of the different things you can do with a solution like the above gives a good illustration of the architecture choices you need to think about, and some thinking to help you come up with the right position for your company.
