Yesterday, I wrote an article about how we refactored some problematic custom functionality in Dynamics to run on Azure instead, and discussed the benefits of doing so. In this article, I want to discuss how we can take the same solution and extend it again using Serverless components on Azure, this time in a different way: to implement an API.
First off, we need to consider the wider solution. In the first article, we focused on the invoice/order processing requirements and the architecture we implemented. The below diagram represents this.
The below diagram shows how we receive invoices/orders from B2B partners. They come in through a traditional approach, supported by many of our partners, where batch files are received. We have implemented a BizTalk solution which receives the batches via SFTP, then debatches and processes the files and loads the data into Dynamics.
We then have the invoice/order processing that we discussed previously and finally, we have a BizTalk solution which takes completed orders/invoices and sends them to our line of business systems.
While this works well, we are finding that some of our partners are now moving forward with their technology capabilities and we also have new partners who already have newer technology platforms. The traditional batch-based approach doesn’t always work well for all partners. For those that are used to batches, it is great but for others, we need to do something different. The type of business means that we tend to offer multi-channel approaches for partners to integrate with us. This makes us easy to work with.
Therefore, we decided to explore the options to implement a modern-looking API to give our partners an alternative option and to gain wider adoption of our business solution.
Also, we felt that the Azure platform would have a lot of services to help us with this and we wanted to explore Serverless options again. We envisioned our API would look like the below diagram.
We would proxy our API with Azure API Management, which we already used anyway, which would give us an excellent developer experience for our partners and make it easy for them to integrate with us. A well-documented and easy-to-use API is key to having success in this area. The API we would implement would offer 2 operations. The first would allow the partner to submit an invoice/order. The 2nd would allow the partner to check the status of their submission.
At this stage, we knew that the processing of the invoice/order would be asynchronous, and if we were to get spikes in load where we receive a lot of invoices/orders, we would load-level and queue up requests, processing them at a rate that would protect the system. The partner will submit a message and get an Id. They can then come back later to check the status of their submission. We would also support a callback option if the partner wanted it.
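The submit-then-poll contract described above can be sketched from the partner's point of view. This is a minimal in-memory stand-in (the class and method names are illustrative, not the real API): submit returns an Id immediately, the background worker processes later, and the partner polls with the Id.

```python
import uuid

# Minimal in-memory sketch of the submit/poll contract; the real endpoints
# would sit behind Azure API Management with a queue-backed worker.
class InvoiceApi:
    def __init__(self):
        self._status = {}   # id -> current status
        self._queue = []    # submissions awaiting background processing

    def submit(self, payload: dict) -> str:
        """Accept the invoice/order, queue it, and return an Id immediately."""
        msg_id = str(uuid.uuid4())
        self._status[msg_id] = "Received"
        self._queue.append((msg_id, payload))
        return msg_id  # the partner polls with this Id later

    def get_status(self, msg_id: str) -> str:
        return self._status.get(msg_id, "Unknown")

    def process_next(self):
        """Simulate the asynchronous background worker draining the queue."""
        msg_id, _payload = self._queue.pop(0)
        self._status[msg_id] = "Processed"

api = InvoiceApi()
ref = api.submit({"invoice": "INV-001", "amount": 120.50})
print(api.get_status(ref))   # → Received (immediately after submission)
api.process_next()
print(api.get_status(ref))   # → Processed (after background work)
```

The point of the pattern is that `submit` never blocks on Dynamics; the queue absorbs spikes and the worker drains at a safe rate.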
If we take a look at how the API is built to support invoice submission, we can see that in the diagram below.
- The partner will submit a message to Azure API management
- APIM will forward the message to an Azure Function which will act as the API backend
- The function will record receiving the message to an Azure table and the status will be maintained here
- The function will also write the message to a blob in Azure Storage. This will allow support for large messages
- The function will then write a message to Azure Service Bus queue to allow the received message to be processed asynchronously later. At this point, the call will be returned to the partner
- Behind the scenes, a Logic App will pick up the message from Service Bus
- The Logic App will get the payload from blob Storage
- The Logic App will call a function to insert the message into Dynamics and then update the table to indicate that the message is now in Dynamics
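The Function backend's part of the steps above can be sketched as follows. The stores here are in-memory dictionaries standing in for Azure Table storage, Blob storage and the Service Bus queue; none of this is real SDK code, just the shape of the flow.

```python
import json
import uuid

# In-memory stand-ins for Azure Table storage, Blob storage and the
# Service Bus queue (illustrative only, not the real SDK objects).
status_table = {}       # tracks the status of each submission
blob_store = {}         # holds the full payload, supporting large messages
service_bus_queue = []  # lightweight pointer messages for async processing

def submit_invoice(payload: dict) -> dict:
    """Sketch of the API-backend Function: record, persist, enqueue, return."""
    msg_id = str(uuid.uuid4())
    # 1. Record receipt in the Azure table; status is maintained here
    status_table[msg_id] = {"Status": "Received"}
    # 2. Write the full payload to blob storage (keeps queue messages small)
    blob_store[msg_id] = json.dumps(payload)
    # 3. Queue a pointer message so a Logic App can process it later
    service_bus_queue.append({"id": msg_id})
    # 4. Return immediately to the partner with the tracking Id
    return {"id": msg_id, "status": "Received"}

result = submit_invoice({"orderNumber": "PO-42", "lines": 3})
print(result["status"])   # → Received
```

Note the queue message carries only the Id; the Logic App fetches the payload from blob storage, which is what allows large messages without hitting Service Bus size limits.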
Processing the Order/Invoice
Once the message is loaded into Dynamics, the existing invoice processing engine on Azure will process the received data just as in the previous article. The difference, however, is in how the notifications are published to Service Bus. At various points in the processing, the engine publishes to a topic, allowing us to route the message to multiple queues using pub/sub. From here we can implement Logic Apps to process the messages and update the Azure Table to record the current status of the invoice/order.
This process will update the table to indicate that validation failed, or the processing was successful. The below picture shows how this works.
- The invoice processing engine publishes status update events to Service Bus Topics. We can route these to queues for our new API solution
- The Logic Apps can update the table with status changes
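The pub/sub routing above can be illustrated with a small sketch: the engine publishes a status event to a topic, a subscription delivers it, and a handler (playing the Logic App's role) updates the status table. The `Topic` class and all names are illustrative stand-ins, not Service Bus APIs.

```python
# In-memory sketch of topic-based status updates; `status_table` stands in
# for the Azure table that tracks each submission.
status_table = {"abc123": {"Status": "Received"}}

class Topic:
    """Toy pub/sub topic: every subscriber sees every published event."""
    def __init__(self):
        self._subscriptions = []

    def subscribe(self, handler):
        self._subscriptions.append(handler)

    def publish(self, event: dict):
        for handler in self._subscriptions:
            handler(event)

def update_status(event: dict):
    """Stand-in for the Logic App that writes status changes to the table."""
    status_table[event["id"]]["Status"] = event["status"]

status_topic = Topic()
status_topic.subscribe(update_status)

# The invoice engine publishes as it moves through validation and processing
status_topic.publish({"id": "abc123", "status": "ValidationFailed"})
print(status_table["abc123"]["Status"])   # → ValidationFailed
```

Because it is a topic rather than a queue, other subscribers (auditing, alerting) can be added later without touching the publisher.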
Partner gets status
The next part of the implementation requires us to allow the partner to check the status of their submitted message. This implementation was easy. The partner submits the Id they were given when originally submitting the invoice/order. This goes through APIM to an Azure Function, which looks up the status in the Azure table and returns it.
Implementing this particular bit in an Azure Function was really simple and performs well. Table storage has superb latency and the Function adds very little overhead to the query.
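The status-check Function is essentially one table lookup. A minimal sketch, again using an in-memory dictionary as a stand-in for the Azure table (status values and names here are illustrative):

```python
# In-memory stand-in for the Azure table that tracks submissions.
status_table = {
    "abc123": {"Status": "InDynamics"},
}

def get_status(msg_id: str) -> dict:
    """Sketch of the status-check Function: look up the partner's Id."""
    entry = status_table.get(msg_id)
    if entry is None:
        # Unknown Ids get a not-found style response rather than an error
        return {"id": msg_id, "status": "NotFound"}
    return {"id": msg_id, "status": entry["Status"]}

print(get_status("abc123")["status"])   # → InDynamics
print(get_status("nope")["status"])     # → NotFound
```

In the real solution the Id would be a point lookup by partition and row key, which is where Table storage's low latency comes from.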
In the previous section, we discussed the partner checking the status of their submission. This requirement exists because we have an asynchronous process in the background, which allows us to do several good things architecturally and makes the solution work well. The tradeoff is that the partner submitting to an API will not get an immediate response. Sometimes this is difficult for businesses to understand; the transition from RPC-style to asynchronous architectures is not an insignificant leap in thinking for the non-technical people involved in your project. The tradeoff, however, has many benefits and can save you a significant amount of money by not having to scale for peak load.
While the long polling approach can be a bit of a pain, we can also easily support a callback mechanism. In the below diagram you can see how we implemented this.
- The invoice processing engine publishes events as we discussed earlier
- When the Logic App processes the event, if the partner has a callback URL registered with us, we can send them a message to tell them the processing is done
The good thing, in this case, is the durability of the Logic App. This gives us replay support and other resilience features, making the callback quite reliable.
From the partner's perspective, implementing and offering up these secure HTTP endpoints is a lot easier today than it used to be. For example, they could just give us an Azure Function URL to call back to. They could then pop the message on a queue and process it in the background.
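What a partner's callback endpoint might look like can be sketched as: accept the notification, drop it on an internal queue, and acknowledge immediately so the caller isn't held up. The queue here is an in-memory stand-in and the handler shape is hypothetical.

```python
from queue import Queue

# In-memory stand-in for the partner's internal work queue.
inbound = Queue()

def callback_handler(notification: dict) -> int:
    """Receive the status callback and defer the real work."""
    inbound.put(notification)   # process later in the background
    return 202                  # HTTP Accepted: acknowledged, not yet processed

code = callback_handler({"id": "abc123", "status": "Processed"})
print(code)              # → 202
print(inbound.qsize())   # → 1
```

Returning 202 quickly keeps the callback cheap for both sides and plays well with the Logic App's retry behaviour if the endpoint is briefly unavailable.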
Where’s the value
I hope that in this solution you can see from an architecture perspective how easy it is to implement. There are only a couple of Serverless services involved, mainly Logic Apps and Functions. The solution is simple; however, by implementing this asynchronous approach we can accept a very high volume of inbound messages from partners, as the Function, APIM and queue/storage will handle very significant load. We then process in the background at a rate acceptable for Dynamics and our solution. This is a big deal as our solution scales and adoption grows.
The result is a high-performing, reliable API which will give the partner a good experience.
In terms of productivity, I can give a good example of the difference that Serverless offers. In the real world, I was able to go from zero to having this API solution built, functional, and able to process 10,000 messages in our system test environment in approximately 2 days. That includes good things like automated build and automated deployment for all test environments. Now there's a statement of how Serverless lets you focus on the solution rather than getting sidetracked by non-functional and technical problems.
Hopefully, this will inspire others to consider the Serverless technologies on Azure when looking to take advantage of new opportunities. In addition to all of the benefits, the fail-fast approach we take means that at any point, if the solution did not work, we could just bin it all and we would have hardly spent any money.