Extending Dynamics CRM Functionality with Serverless Components on Azure

Recently, at CloudBurst in Sweden, one of the talks discussed the different types of solutions people build with Serverless components.  One scenario we use quite often is extending an application’s core functionality with complementary functionality developed using Serverless components. I thought I’d talk a little about one of the occasions where I have done this.

In this instance, we had developed an order processing system in Dynamics 365.  When orders and invoices were loaded into the system, we needed to validate them, process them against customers’ contracts, and then forward them to our line-of-business systems. Originally, the solution was built using standard out-of-the-box CRM functionality and the standard CRM extension points, such as custom plugins. We found that while this solution sort of worked, it had a few problems.

Problems included:

  • The logic was split across many components, making it difficult to manage and maintain:
    • Custom plugins
    • Workflow processes
    • Custom workflows
    • Business rules
  • There was no easy way to pause processing
  • There were performance problems
  • There were reliability problems when plugins encountered errors

While there are lots of custom extensibility points for solutions built on the Dynamics platform, there are rules and constraints you need to think about when using them.  You need to color within the lines, so to speak, otherwise you can have problems.  When your requirements start to cross those lines, the good news is that the Azure platform is your friend and can help you a lot. We decided to explore how to use Serverless components on Azure to tackle this problem in a way that would be more complementary to our solution.

In this case, I decided to use an Azure Function as the Serverless component to implement this solution. I would use the Dynamics feature that publishes changes to an entity to Service Bus, and then from Service Bus the function would process the event and update the entity as required.
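To make this concrete, here is a minimal sketch of what the receiving function might look like. The queue name, connection setting name, and the OrderEvent shape are all assumptions for illustration; the out-of-the-box Dynamics integration actually posts a serialized RemoteExecutionContext, which you would deserialize instead of a custom JSON payload.

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

// Hypothetical shape of the event published when an order entity changes.
public class OrderEvent
{
    public Guid OrderId { get; set; }
    public string State { get; set; }
}

public static class ProcessOrderEvent
{
    // Queue name and connection setting are illustrative placeholders.
    [FunctionName("ProcessOrderEvent")]
    public static void Run(
        [ServiceBusTrigger("order-events", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        var orderEvent = JsonConvert.DeserializeObject<OrderEvent>(message);
        log.LogInformation("Order {OrderId} changed, current state: {State}",
            orderEvent.OrderId, orderEvent.State);

        // From here, the state machine (sketched in the next section) decides
        // what processing step, if any, runs for this state.
    }
}
```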

State Machine

First up, we had to think about the logic we needed to process.  In this case, we could implement a state machine style pattern: receive events from the entity, work out what to do next, and then run the logic that performs the actions required for the entity in that state.

The diagram below gives you a flavor of what the different states were and what should happen in each. Note that this is simplified for the purposes of the article.

State Machine
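As a rough illustration of the pattern, the sketch below maps each state to the command object that performs the next step. The states and command names are made up for this article; the real solution’s states follow the diagram above.

```csharp
using System;

// Illustrative states; the real solution's states follow the diagram above.
public enum InvoiceState { Received, Validated, Matched, Forwarded, Completed }

// A command object encapsulates the work for one step and returns the new state.
public interface IInvoiceCommand
{
    InvoiceState Execute(Guid invoiceId);
}

public sealed class ValidateInvoiceCommand : IInvoiceCommand
{
    public InvoiceState Execute(Guid invoiceId)
    {
        // ...validation and contract-matching logic would live here...
        return InvoiceState.Validated;
    }
}

// Terminal or unknown states: leave the record alone so events die out.
public sealed class IgnoreEventCommand : IInvoiceCommand
{
    private readonly InvoiceState _state;
    public IgnoreEventCommand(InvoiceState state) => _state = state;
    public InvoiceState Execute(Guid invoiceId) => _state;
}

public static class InvoiceStateMachine
{
    public static IInvoiceCommand GetNextCommand(InvoiceState current)
    {
        switch (current)
        {
            case InvoiceState.Received: return new ValidateInvoiceCommand();
            // ...one command per non-terminal state...
            default: return new IgnoreEventCommand(current);
        }
    }
}
```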

Understanding the Sequence Flow

At this point, I hope it’s clear that we have events published from Dynamics to a queue and a function that handles those events.  Within the function, we implement a state machine to work out what happens next, which gives us a command object that the function executes to process the next step.  The diagram below shows the approximate flow expected from when the record is inserted.  Eventually, the record reaches a state where events for it are simply ignored.

Sequence flow

The key is that breaking the process up into steps means it can change and become more complex without becoming unmanageable.  We also have points where each state change is persisted, which lets the business wind back processing in error scenarios: they can repair and resubmit, causing the invoice to be processed again.
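Tying the pieces together, the per-event flow might look roughly like the sketch below, reusing the illustrative types from the earlier sketches; the real implementation has far more going on inside each command.

```csharp
using System;

public static class EventFlow
{
    public static void Handle(OrderEvent evt)
    {
        // Work out where the record is and what should happen next.
        var current = (InvoiceState)Enum.Parse(typeof(InvoiceState), evt.State);
        var command = InvoiceStateMachine.GetNextCommand(current);
        var next = command.Execute(evt.OrderId);

        Console.WriteLine($"Invoice {evt.OrderId}: {current} -> {next}");

        // The new state is persisted back to Dynamics here (SDK sketch in the
        // next section). Each persisted state change is a checkpoint the
        // business can wind back to for repair and resubmit.
    }
}
```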

Logical Implementation

Looking at the logical implementation diagram below, you can see that the entity changes are picked up and processed by Dynamics System Jobs.  This is an asynchronous way of publishing the events to Service Bus and comes as a configurable, out-of-the-box feature in Dynamics.  Once the events are on Service Bus, the Service Bus trigger binding executes the function.

Within the function, the state machine and business logic are executed, making changes to the entity back in Dynamics.

Azure Dynamics CRM
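Writing the state change back to Dynamics might look something like the late-bound sketch below; the actual solution used early-bound classes generated from the Dynamics metadata, and the field name here is purely illustrative.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Tooling.Connector;

public static class DynamicsWriter
{
    public static void UpdateProcessingState(Guid invoiceId, int newState)
    {
        // Connection string comes from app settings in practice.
        var client = new CrmServiceClient(
            Environment.GetEnvironmentVariable("DynamicsConnectionString"));

        // Update only the field we need; a sparse update avoids touching other
        // attributes and firing unrelated logic. "new_processingstate" is a
        // hypothetical option set field for this sketch.
        var update = new Entity("invoice", invoiceId);
        update["new_processingstate"] = new OptionSetValue(newState);
        client.Update(update);
    }
}
```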

One thing we need to watch for in this implementation is the importance of getting the state machine right.  If we got the logic wrong and kept updating the CRM entity, we could end up in a recursive loop, which could cost a lot of money on a consumption model.  Let’s just make sure we get that bit right!
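One cheap safety net, sketched below using the DynamicsWriter sketch above: only write back to Dynamics when the state actually changes. The event raised by our own update then maps to the ignore command, and the chain stops.

```csharp
using System;

public static class LoopGuard
{
    public static void PersistIfChanged(Guid invoiceId, InvoiceState current, InvoiceState next)
    {
        if (next == current) return; // no write, no new event, no loop
        DynamicsWriter.UpdateProcessingState(invoiceId, (int)next);
    }
}
```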

Design Decisions

When we consider an approach like this, there are a few key design decisions we made, and I think there is a lot of value in discussing them.  Some thoughts:

Keep it simple

One of our architecture principles is to keep it simple.  Unfortunately, in the real world the business logic you need to execute doesn’t always make that easy.  In our case, the processing logic and the business logic in each stage are very complicated.  The logical architecture above looks very simple and easy to understand on paper, and the benefit of the function is that we could use well-written C# code to manage and encapsulate the complexity of the actual work the function does.

Using C# in this case also allows us to extensively unit test the logic and to develop SpecFlow behavior-driven tests, so we have excellent testing and documentation of the component’s behavior.
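For example, the state machine can be tested in complete isolation from Dynamics and Service Bus. The sketch below uses xUnit against the illustrative types from earlier; the real project paired tests like these with SpecFlow features.

```csharp
using Xunit;

public class InvoiceStateMachineTests
{
    [Fact]
    public void Received_invoice_is_validated_next()
    {
        var command = InvoiceStateMachine.GetNextCommand(InvoiceState.Received);
        Assert.IsType<ValidateInvoiceCommand>(command);
    }

    [Fact]
    public void Completed_invoice_events_are_ignored()
    {
        var command = InvoiceStateMachine.GetNextCommand(InvoiceState.Completed);
        Assert.IsType<IgnoreEventCommand>(command);
    }
}
```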

I think that, when you consider what is involved in our case, we have managed to keep things simple despite the level of business complexity in this area.

Why didn’t you use a Logic App?

There are ways I could have used a Logic App in this solution that would have worked perfectly well, and for some customers, combining Logic Apps and Functions could do a good job in a similar scenario.  In our case, the key point was that the complexity of the business logic and processing logic meant we needed to extensively unit test the component.  Functions make it much easier to do this to the level required, and we would end up with fewer problems in the long term.

At the same time, Logic Apps offer some out-of-the-box diagnostics capabilities which we don’t get with Functions, so we did sacrifice some useful things.

In our case, we also had one input and one output, which makes the choice between a Logic App and a Function more difficult.  If there were many inputs and outputs, I’d be more likely to look towards Logic Apps being involved in the solution, but in this case the ability to test extensively and to change the code in a more agile way were the key decision factors.

One other point to note: using the Dynamics SDK with early-bound objects in the function made it a lot simpler to do complex things than the Dynamics connectors in a Logic App, which have some challenges.  The choice for us would have been either all-in on Functions or a combination of Functions plus Logic Apps.

Where is the Value?

Hopefully, at this point, you can see that we were able to take a struggling Dynamics solution that was difficult to manage and maintain, performed poorly, and was not very resilient, and move it to the Azure platform, where the constraints were different and we could build a solution that worked better and had a lower cost of ownership.

The Serverless platform on Azure gave us several services we could use to create the modified solution.  We chose to use Azure Functions for the reasons I discussed above.  In this case, provisioning the services we needed was a point-and-click exercise, which takes away a lot of the infrastructure questions you may previously have had to deal with.  We know that the Serverless platform comes out of the box with a level of performance, security, and resilience we can trust, and it allows us to focus on solving the business and application problems rather than spending lots of time on infrastructure challenges.

We now have a component that runs on a consumption plan.  This is very good for the spiky load pattern we have now: sometimes we may go hours with no invoices, and then we might get lots in a short time.  Maybe in the future we will get a more evenly distributed load, and at that point it may be more cost-effective to move to a per-node, per-hour compute model, which would give us a consistent cost; but for the foreseeable future the consumption plan will grow as our solution does.

For those organizations investing heavily in Dynamics, I hope this article shows how you really should be thinking architecturally about your platform being Dynamics + Azure.  This gives you a great platform for developing a wide range of solutions.  Within Azure, if you are also open to newer approaches, the Serverless platform can produce a massive shift from solving technical problems to focusing your project team on application requirements and business problems.  The productivity increase from this can be significant.

In an upcoming article, I will talk about how we used Turbo360 to monitor and manage this solution.

This article was published on Oct 23, 2018.
