Recently we have begun using the new cost optimization features for Logic Apps in Serverless360. (https://docs.serverless360.com/docs/cost-optimization)
The idea of this feature is that with consumption Logic Apps you can enable and disable them on a schedule to suit your usage patterns. For example, if you have Logic Apps which poll Service Bus, you pay trigger costs each time you check for data, even when there is nothing to process. Some other patterns that incur costs which can be optimized include:
- Listening on Event Hub
- Listening on SFTP sites
- Recurrence triggers that poll APIs, storage, etc.
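To put numbers on why an idle polling trigger still costs money, here is a rough back-of-the-envelope calculation. The per-execution price below is an assumption for illustration only; check the current Azure Logic Apps consumption pricing page for real figures.

```python
# Rough illustration of why polling triggers add up on consumption Logic Apps.
# ASSUMPTION: the per-execution price below is illustrative, not an official figure.
STANDARD_CONNECTOR_PRICE = 0.000125  # assumed $ per standard connector execution

def monthly_trigger_cost(poll_interval_minutes: float,
                         price_per_execution: float = STANDARD_CONNECTOR_PRICE) -> float:
    """Cost of a polling trigger that fires around the clock for 30 days."""
    executions = (30 * 24 * 60) / poll_interval_minutes
    return executions * price_per_execution

# A Service Bus trigger polling every minute fires ~43,200 times a month,
# even if no messages ever arrive -- roughly $5.40 per month for that one trigger.
print(round(monthly_trigger_cost(1), 2))
```

Multiply that by dozens of Logic Apps across several environments and the idle polling cost becomes worth scheduling away.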
We have a lot of Logic Apps which use these patterns, deployed across a number of environments, and the aim is to optimize them to make some decent cost savings. Note that we will probably not configure this for Logic Apps with passive triggers, such as Request triggers, as they don't incur any trigger costs.
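Under the hood, enabling and disabling a consumption Logic App is a POST action on the Azure management REST API, which is effectively what a scheduled tool like this automates. A minimal sketch of building that call (the subscription, resource group, and workflow names are placeholders):

```python
# Sketch of the underlying mechanism: consumption Logic Apps expose
# enable/disable POST actions on the Azure management REST API.
# Disabling a workflow stops its trigger from polling (and billing).
MANAGEMENT_BASE = "https://management.azure.com"
API_VERSION = "2016-06-01"

def workflow_action_url(subscription_id: str, resource_group: str,
                        workflow: str, action: str) -> str:
    """Build the management URL for the 'enable' or 'disable' workflow action."""
    assert action in ("enable", "disable")
    return (
        f"{MANAGEMENT_BASE}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Logic/workflows/{workflow}"
        f"/{action}?api-version={API_VERSION}"
    )

# POST this URL with a bearer token (e.g. via `az rest`) to stop trigger polling.
print(workflow_action_url("<sub-id>", "my-rg", "sb-poller", "disable"))
```

You could script this yourself, but as discussed below the point of the feature is not having to maintain that scripting.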
Overall Environment Cost
If we take a look at the Serverless360 cost analyzer graph for one of our environments, you can see it is typically costing around $50 per day.
On Friday we set up the new feature, and you can see I have managed to drop it down to $15 per day.
This is the overall environment cost graph.
Cost for Logic Apps Consumption
In the analysis graphs for the Serverless360 cost analyzer I have also created a graph which looks just at my spend on Logic Apps. I can see that I've made a big reduction, from around $12 per day to less than $1 per day.
At this point I was wondering why the Logic App cost reduction (which was good) didn't match what I was expecting from the first graph above. At the top level I thought my cost had dropped by $35 per day, but the Logic App cost has only dropped by around $10 per day. This is one of the challenges with Azure billing data and how to demystify it. I used a graph of my cost by resource type over the last 30 days to see what I was spending on other resources, and it looks like my Integration Account and APIM costs have dropped too. It is likely that the Azure API simply hasn't released all of the billing data yet and will update later, so I'll use my data from the 19th as the initial indicator and keep monitoring it.
How did I set up the monitor
In the optimization section I can create one or more cost optimization schedules like the one below.
Within the schedule I'd associate one or more resources (which can be Logic Apps or other resource types), and you then define your up and down times using the grid shown below.
The aim is to make it easy to set up a schedule and manage resources. You don't need to modify scripts or have an advanced DevOps skill set to maintain this solution. Let's just make it easy.
You can set up multiple schedules to suit your usage patterns. I will have one schedule which keeps rarely used resources turned off most of the time, and other schedules which turn resources on during business hours (as shown above).
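Conceptually, the grid expresses a simple rule: for each day of the week, the hours during which the resources should be up. A minimal sketch of that idea (the schedule format below is hypothetical, not Serverless360's actual data model):

```python
# Hypothetical representation of an up/down schedule grid: for each day of the
# week, a range of hours during which the associated resources should be enabled.
from datetime import datetime

BUSINESS_HOURS = {
    "Mon": range(8, 18), "Tue": range(8, 18), "Wed": range(8, 18),
    "Thu": range(8, 18), "Fri": range(8, 18),
    "Sat": range(0, 0), "Sun": range(0, 0),  # off all weekend
}

def should_be_enabled(when: datetime, schedule=BUSINESS_HOURS) -> bool:
    """True if the schedule says resources should be up at this time."""
    day = when.strftime("%a")
    return when.hour in schedule.get(day, range(0, 0))

print(should_be_enabled(datetime(2023, 1, 2, 9)))  # Monday 09:00 -> up
print(should_be_enabled(datetime(2023, 1, 1, 9)))  # Sunday 09:00 -> down
```

A second "mostly off" schedule would simply have empty hour ranges on most days, matching the pattern described above.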
Based on my data for the 19th, I am currently expecting to save around $10 per day for this environment.
Out of my 5 environments, I would expect to see similar savings in 2 others where I am turning resources off most of the time when they are not used.
In my UAT environment, all of my interfaces will be on during the business day to allow testing, and some interfaces will be on overnight where needed. I'd expect to see lower savings there.
In production I am not going to use this feature at the moment, but we do have some interfaces which are only used occasionally yet, because of the nature of their trigger, are polling for data regularly, so we might be able to turn them off too.
I'm guessing my savings will be in the region of:
- Dev + Build Environments = $10 per day each
- System Test = $7 per day
- UAT = $5 per day
- Production = Not using
Summing that up, I am looking to save in the region of $900 per month using this Cost Optimization feature, which works out to around $10k per year.
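The arithmetic behind that ballpark, using the per-environment estimates from the list above (these are rough projections, not measured data):

```python
# Summing the estimated daily savings per environment from the list above.
daily_savings = {
    "Dev": 10, "Build": 10, "System Test": 7, "UAT": 5, "Production": 0,
}
per_day = sum(daily_savings.values())  # $32/day across all environments
per_month = per_day * 30               # roughly $960/month
per_year = per_month * 12              # roughly $11,500/year
print(per_day, per_month, per_year)
```

Rounded down for the inevitable variance in real usage, that lands in the "~$900/month, ~$10k/year" region quoted above.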
Obviously these are early findings and we have only been using the feature for a couple of days, but I'll do a follow-up post once we have been using it a little longer to see how we get on. So far it looks pretty promising.