Microsoft Flow: Tuning your Flow for performance

|  Posted: November 26, 2018  |  Categories: Azure

A few months ago, I wrote a post on my blog about “Processing Feedback Evaluations (paper) automagically with SmartDocumentor OCR, Microsoft Flow & Power BI”, in which I created a Flow that processes a speaker evaluation form and places the results on a Power BI dashboard to present at conferences.

Because I created this from a business user perspective, avoiding any custom code, to properly extract the data from the SmartDocumentor original message, which is a Key/Value JSON message, I had to create an insane amount of conditions inside a Switch operation.


The picture above is just a small fragment of the conditions inside the Switch operation. Even reducing my browser resolution to the minimum, I cannot fit the entire picture of all the conditions inside my Switch operation.


The result, regarding performance, was that each speaker evaluation submission took a minute or more to be processed:


You may see in the history that it took one minute, but in reality, you will find that the Switch operation ran 31 iterations and took 2 minutes to complete:


That is not bad, but if you are at a big conference and have an insane amount of evaluations to process, you may be interested in optimizing your Flow to process the evaluation forms faster.

So, the critical question is: how can we optimize our Flows?

Two vital tips to optimize your flow performance are:

  • Avoid at any cost having nested conditions inside a loop (Apply to each), as I had implemented on the Default branch inside my Switch operation;
  • Avoid at any cost having nested loop operations.

I think these two situations are the most painful situations regarding performance.

Solution 1: Improving performance by reducing execution time to a quarter of the initial time

So, as I said earlier, the massive number of conditions inside my Switch operation, especially on the Default branch, was having a significant impact on the performance of my Flow execution.

If you read my blog post, you will find that those nested conditions were necessary to retrieve the profile of the attendee (“whoAmI”), which is a multiple-option checkbox that appears in positions 13 to 29 of the JSON message. So, to avoid this situation, I asked my internal SmartDocumentor OCR team to improve the incoming JSON message a little by adding another record that includes the attendee profile combination.


Because they were doing the OCR, they were able to identify that combination, and with this small change to my incoming message, I was able to replace the nested conditions inside the Default branch with a simple Case branch:

Microsoft flow performance
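To make the difference concrete, here is a minimal sketch in Python of the two extraction approaches. All key names (`WhoAmI_*`, `WhoAmIProfile`) are hypothetical placeholders for illustration, not the actual SmartDocumentor schema:

```python
# Incoming Key/Value message: a list of {"Key": ..., "Value": ...} records.
old_message = [
    {"Key": "WhoAmI_Developer", "Value": "True"},
    {"Key": "WhoAmI_Architect", "Value": "False"},
    {"Key": "WhoAmI_BusinessUser", "Value": "True"},
]

def profile_with_nested_conditions(records):
    """Old approach: inspect every checkbox record one by one,
    mirroring the nested conditions inside the Switch's Default branch."""
    selected = []
    for record in records:
        if record["Key"].startswith("WhoAmI_") and record["Value"] == "True":
            selected.append(record["Key"].replace("WhoAmI_", ""))
    return ", ".join(selected)

# Improved message: the OCR side adds one record with the combined profile.
new_message = old_message + [
    {"Key": "WhoAmIProfile", "Value": "Developer, BusinessUser"},
]

def profile_with_single_lookup(records):
    """New approach: one direct lookup, replaceable by a single Case branch."""
    return next(r["Value"] for r in records if r["Key"] == "WhoAmIProfile")
```

Both return the same profile string, but the second requires no iteration over checkbox positions, which is exactly what the extra record buys you inside the Flow.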

This strategy also enabled me to reduce the number of operations inside my Flow. As I no longer needed the “Count” variable, I was able to delete:

  • the Set “Count” variable operation;
  • and the Increment “Count” variable operation.

The result:

Microsoft flow performance

Wow! It is impressive how a simple change to the input message can make a considerable difference to the end-to-end execution performance of your Flow.

Another essential lesson learned: optimize your messages to have the proper input and output structure.

Solution 2: Improving performance to less than X second(s) Flow execution time

It is evident that the time-consuming task is extracting and mapping the incoming JSON message into the expected JSON message to be delivered to the Power BI action – Add rows to a dataset. This is what we call message transformation, or mapping, in enterprise integration.

To achieve the best result regarding performance, I had to leave the business user approach, enter developer mode, and move this transformation logic outside my Flow by implementing it inside an Azure Function that:

  • accepts my JSON Key/Value message sent by SmartDocumentor OCR;
  • and returns another JSON message with the required attributes for sending to Power BI.
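As a rough sketch, the core of such a function could look like the Python below. The field names are assumptions for illustration; the real columns are defined by the OCR output and the Power BI dataset. In an actual Azure Function, this logic would sit inside an HTTP-triggered entry point:

```python
import json

def transform(body: str) -> str:
    """Map the Key/Value JSON message into the flat record expected by
    the Power BI 'Add rows to a dataset' action.

    Field names here are hypothetical placeholders.
    """
    # Index the Key/Value records once, for direct lookups instead of
    # nested conditions inside the Flow.
    pairs = {record["Key"]: record["Value"] for record in json.loads(body)}
    row = {
        "Speaker": pairs.get("Speaker", ""),
        "SessionScore": pairs.get("SessionScore", ""),
        "WhoAmI": pairs.get("WhoAmIProfile", ""),
    }
    return json.dumps(row)

# Illustrative input message.
sample = json.dumps([
    {"Key": "Speaker", "Value": "Sandro Pereira"},
    {"Key": "SessionScore", "Value": "9"},
    {"Key": "WhoAmIProfile", "Value": "Developer"},
])
```

The point is that all the mapping work happens in a few lines of code, in milliseconds, instead of dozens of conditions and variable operations inside the Flow.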


You may think that there isn't an out-of-the-box Azure Functions connector, and you are correct. Nevertheless, Azure Functions are exposed over HTTP, so you can make use of the HTTP connector to call them:


Then make use of the Parse JSON action to create the tokens to be used on the Power BI action.
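For reference, the sketch below builds in Python the equivalent of the POST request that the HTTP action issues. The function URL and key are placeholders; Azure Functions accept the function key via the `code` query string parameter:

```python
import json
from urllib import request

def build_function_request(base_url: str, function_key: str, payload: list) -> request.Request:
    """Build the POST request the Flow's HTTP connector would issue
    against the Azure Function endpoint."""
    url = f"{base_url}?code={function_key}"       # function key authorizes the call
    body = json.dumps(payload).encode("utf-8")    # Key/Value message as the request body
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder URL and key, for illustration only.
req = build_function_request(
    "https://myfuncapp.azurewebsites.net/api/TransformEvaluation",
    "<function-key>",
    [{"Key": "Speaker", "Value": "Sandro Pereira"}],
)
```

Sending this request (for example with `urllib.request.urlopen(req)`) would return the transformed JSON, which the Parse JSON action then turns into tokens for the Power BI action.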


The end solution would be like this:


If you try the solution now, you will be amazed by the performance achieved:


1-second average! WOW!


Lessons learned

These are some of the vital tips that you need to take into consideration while implementing your Flows:

  • Avoid at any cost having nested conditions inside a loop (Apply to each);
  • Avoid at any cost having nested loop operations;
  • Optimize the structure of the input and output messages;
  • Avoid doing heavy message transformations or extractions using standard Flow actions;
    • Move this logic outside the Flow; Azure Functions are a good solution.

Serverless360 is a one-platform tool to operate, manage, and monitor Azure Serverless components. It provides efficient tooling that is not available, and not likely to be available, in the Azure Portal. Try Serverless360 free for 15 days!


Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.