Another release and more exciting features coming your way from the Serverless360 team to cover more ground, or rather, more cloud. With this release, we are bringing you the capability to manage and monitor Azure Logic Apps and Web endpoints.
In this blog, we will be looking at Data Monitoring, one of the most popular features of BizTalk360 thanks to its invaluable capabilities for modern cloud and hybrid enterprises. Combined with feedback from multiple customers asking for broader monitoring capabilities, this made it the perfect candidate for the next Serverless360 release. Right now, monitoring within Serverless360 is focused on the state of your namespaces: we allow you to monitor whether everything is in the expected state, along with the overall number of messages in a specified state. However, there are always expectations about the volume of data and the SLAs that your solutions must meet.
Data Monitoring will help you verify that you are processing the right number of messages in a specified time window to ensure that you meet your SLAs. In the current preview release, we allow you to monitor historical data using all available Logic App metrics. An example of such monitoring would be a daily check that, within the last 24 hours, the Logic App did not have more than 2 failed runs; otherwise, a warning or an error report is sent to a specified email address or notification channel.
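The daily-failure example above can be sketched in a few lines. This is only an illustration of the idea, not Serverless360 code: `fetch_count` is a hypothetical callback standing in for whatever queries the metric value (the product does this for you behind the scenes).

```python
from datetime import datetime, timedelta, timezone

def check_failed_runs(fetch_count, error_threshold=2):
    """Daily check: count failed Logic App runs over the last 24 hours
    and return "Error" when the count exceeds the threshold.
    `fetch_count(metric, start, end)` is a hypothetical metric source;
    "RunsFailed" is the Azure Logic App metric for failed runs."""
    now = datetime.now(timezone.utc)
    window_start = now - timedelta(hours=24)
    failed = fetch_count("RunsFailed", window_start, now)
    return "Error" if failed > error_threshold else "Success"

# Usage with a stubbed metric source reporting 3 failures:
print(check_failed_runs(lambda metric, start, end: 3))  # Error
```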
Creating New Data Monitoring Configuration
Adding a new data monitoring configuration is very simple, but first you must have successfully authorized a Service Principal, associated it, and added some Logic Apps. If you have not done that yet, you can read all about it in the documentation and then come back to configure your data monitor.
First, select Logic Apps from your home dashboard, then go to Alarms using the left-hand side menu to create a data monitoring alarm. A new alarm is needed because Data Monitoring has its own type of alarm.
Click the “Create” button and give your alarm a meaningful name. Then go to the third blade (Data Monitoring Alarm) using the Next button and enable the alarm for Data Monitor mapping. The second toggle controls whether you also receive an alert when everything is correct and you have received the right amount of data; if you only want the down alerts without the up alerts, disable it. If you cannot create any more alarms due to licensing restrictions, just edit one of your existing alarms and enable the same settings.
When you have your alarm ready, you can navigate to Data Monitoring from the left-hand side menu. The configuration blade should automatically open. Alternatively, you can open it manually by clicking the “Create” button on the top right-hand side.
On the first blade, you will be required to fill in the basic details of your configuration:
- Choose Alarm – The drop-down includes all the alarms which have data monitoring enabled. Choose the alarm that will be used as a container for your configuration.
- Friendly Monitor Name – This text box can be filled with any name or description that helps you understand what the configuration does, e.g. DailyFailureRateValidation, which clearly tells the user that this configuration verifies daily that there were no failures.
- Choose Logic App – This drop-down is populated with all the Logic Apps currently associated with Serverless360.
- Choose Metric – Here you can choose one of the available metrics that can be monitored using the Serverless360 Data Monitoring capability.
- Warning and Error Thresholds – Here, you need to supply the threshold values that will be validated against the actual count returned for the chosen metric.
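How the two thresholds combine with the metric count can be sketched as follows. This is a minimal illustration of the evaluation, assuming the usual convention that the warning threshold is at or below the error threshold:

```python
def evaluate_thresholds(count, warning_threshold, error_threshold):
    """Map a metric count to one of the three monitor states.
    Assumes warning_threshold <= error_threshold."""
    if count > error_threshold:
        return "Error"
    if count > warning_threshold:
        return "Warning"
    return "Success"

print(evaluate_thresholds(5, warning_threshold=2, error_threshold=4))  # Error
print(evaluate_thresholds(3, warning_threshold=2, error_threshold=4))  # Warning
print(evaluate_thresholds(1, warning_threshold=2, error_threshold=4))  # Success
```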
With the basic details filled in, you can proceed to the next page, where you can specify your schedule. It is important to fill this in correctly, as the schedule determines how much data is evaluated against your threshold values. If you choose to run the schedule “Every 15 min”, data monitoring lays out run times starting from 12:00 am and picks the next one after the current time. The amount of data taken into consideration is exactly one interval, counted backward from that run time. For example, if your next run time is 10:15 and the schedule runs every 15 minutes, the data monitoring mechanism will fetch the count of data for the specified metric between 10:00 and 10:15, compare it against the threshold values, and save the result with one of 3 states: Error, Warning or Success. On the other hand, if your schedule is set to run daily at 1:00 pm, it will collect data from 1:00 pm on the previous day until 1:00 pm on the current day.
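The windowing rule described above can be sketched like this. It is my reading of the documented behavior, not Serverless360 source code:

```python
from datetime import datetime, timedelta

def next_run_window(now, interval_minutes):
    """Run times are laid out every `interval_minutes` starting from
    12:00 am; each run evaluates exactly one interval of data, counted
    backward from its run time. Returns (window_start, window_end)."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    elapsed_minutes = (now - midnight).total_seconds() / 60
    # The next run is the first slot after the current time.
    slots_passed = int(elapsed_minutes // interval_minutes) + 1
    interval = timedelta(minutes=interval_minutes)
    next_run = midnight + slots_passed * interval
    return next_run - interval, next_run

# At 10:07 with a 15-minute schedule, the next run at 10:15
# evaluates the data between 10:00 and 10:15:
start, end = next_run_window(datetime(2024, 5, 1, 10, 7), 15)
print(start.time(), end.time())  # 10:00:00 10:15:00
```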
In the current release, you can set up a schedule to run every x minutes or hours; every x days at a specified time; every weekday (Monday to Friday) at a specified time; or weekly on a specified day and time.
The last option in this configuration is when the data monitor should stop checking. You can select “Never”, in which case it will continue checking until you manually delete it. Alternatively, you can choose a specific end date or a certain number of occurrences. Keep in mind that if you choose an end condition, the configuration will be permanently deleted once the condition evaluates to true. This behavior is planned to change in a future release, which will disable the configuration instead of deleting it.
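The three end-condition options reduce to a simple check. The field names below are illustrative, not the product's actual schema:

```python
from datetime import date

def should_stop(mode, today=None, end_date=None,
                runs_so_far=0, max_occurrences=None):
    """'never' runs until manually deleted; 'end_date' stops on or
    after the chosen date; 'occurrences' stops after N executions."""
    if mode == "never":
        return False
    if mode == "end_date":
        return today >= end_date
    if mode == "occurrences":
        return runs_so_far >= max_occurrences
    raise ValueError(f"unknown end mode: {mode!r}")

print(should_stop("never"))  # False
print(should_stop("occurrences", runs_so_far=10, max_occurrences=10))  # True
```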
Once you save the configuration, you will see it in the list of all the other Logic App data monitoring configurations, which includes details such as name, schedule description and next run time.
To check past results, you can navigate to the data monitoring dashboard, which brings up a calendar and a schedule view displaying all the execution results. In the calendar, days that had any executions are marked with a red, yellow or green circle depending on the states of those executions. Once you select the specific day you want to inspect, the schedule on the right is populated with all of its executions, again color-coded to match their states.
Clicking a specific execution opens a blade with the actual results and the configuration of that data monitor.
We hope this feature will serve you well in setting up data monitoring for your Logic Apps.