
Fully Managed Data Analytics Service Optimized for Large Volumes of Streaming Data

Tags:  
Azure

This blog explains Azure Data Explorer, its core concepts, and a few of its significant features. As the data generated by enterprises grows day by day, data strategists are looking for more effective techniques to handle massive volumes of diverse data. These requirements call for a service that can collect, store, and analyze data with optimal performance. Let us see how Azure Data Explorer addresses them.

Azure Data Explorer is a fast, highly scalable, and fully managed data analytics service for real-time analysis. It is well suited for analyzing large volumes of diverse data from different sources, such as websites, applications, and IoT devices. This data is used for diagnostics, monitoring, reporting, machine learning, and additional analytics capabilities.

Azure Data Explorer makes it simple to ingest data and enables complex ad hoc queries over that data in seconds.

Now let us see how Azure Data Explorer works.

Azure Data Explorer is a distributed database running on a cluster of compute nodes in Microsoft Azure. It is based on relational database management systems (RDBMS), supporting entities such as databases, tables, functions, and columns. It supports complex analytics query operators, such as calculated columns, searching and filtering of rows, group-by aggregations, and joins.
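A minimal sketch of these operators in KQL; the SampleLogs and LevelInfo tables and their columns are hypothetical:

SampleLogs
| where Timestamp > ago(1h)                 // filter rows
| extend DurationSec = Duration / 1s        // calculated column
| summarize EventCount = count() by Level   // group-by aggregation
| join kind=inner (LevelInfo) on Level      // join on a shared column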

At the top level, a cluster contains a collection of databases, and each database contains a collection of tables and stored functions. Each table has a well-defined schema. Unlike a typical RDBMS, there are no constraints such as key uniqueness or primary and foreign keys. The necessary relationships are established at query time.

The following three steps outline how Azure Data Explorer works:

  • Database creation – Create a cluster, and then create one or more databases in that cluster.
  • Data ingestion – Load data into a database so queries can run against it.
  • Query execution – Execute queries and visualize the data. Query execution is available in the Azure portal and as a stand-alone application; SDKs can be used to achieve the same.
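A minimal sketch of steps two and three against an existing database, assuming a hypothetical SampleLogs table; .ingest inline is suitable only for small test data:

.create table SampleLogs (Timestamp:datetime, Level:string, Message:string)

.ingest inline into table SampleLogs <|
2024-01-01T00:00:00Z,Critical,"Disk failure on node 3"

SampleLogs
| where Level == "Critical"
| count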

Now let us see some of the core concepts of Azure Data Explorer.

Data Ingestion

The data management service in Azure Data Explorer is responsible for data ingestion, the process of loading data from one or more sources into a table in Azure Data Explorer. Once ingested, the data is available for querying.

Data Ingestion Process

  • Azure Data Explorer pulls data from an external source and reads requests from a pending Azure queue.
  • The data is batched or streamed to the Data Manager.
  • Batch data flowing to the same database and table is optimized for ingestion throughput.
  • The data is persisted in storage according to the retention policy that has been set.
  • The Data Manager then commits the ingested data to the engine, where it becomes available for query.
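Batching behavior can be tuned per table with the ingestion batching policy; the values below are illustrative, and SampleLogs is a hypothetical table:

.alter table SampleLogs policy ingestionbatching
'{"MaximumBatchingTimeSpan": "00:00:30", "MaximumNumberOfItems": 500, "MaximumRawDataSizeMB": 1024}'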

Azure Data Explorer supports several ingestion methods: managed pipelines, connectors and plugins to various services, ingestion tools, programmatic ingestion through SDKs, and direct ingestion using control commands.

Permissions to Ingest Data

Ingesting data requires 'Database Ingestor' level permission. Other actions, such as querying, may require database admin, database user, or table admin permissions.

Interacting with Azure Data Explorer

The Kusto Query Language (KQL) is the primary means of interaction with Azure Data Explorer. KQL is used to send data queries, while control commands manage entities, discover metadata, and perform other operations. Querying requires database admin, database user, or table admin permissions. The render operator in KQL offers various visualizations, such as tables, pie charts, and bar charts, to represent query results.
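As a sketch of the render operator, assuming a hypothetical SampleLogs table with a Level column:

SampleLogs
| summarize EventCount = count() by Level
| render piechart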

In Azure Data Explorer, a query is a read-only request to process data and return results, without modifying the data or metadata. Kusto queries can be written in SQL or in the Kusto Query Language. For example, the following KQL query counts the critical entries in a Logs table:

Logs
| where Level == "Critical"
| count

The request is stated in plain text, using a data-flow model designed to make the syntax easy to read, author, and automate. The query uses schema entities organized in a hierarchy similar to SQL: databases, tables, and columns.

The query consists of a sequence of query statements, delimited by a semicolon (;), with at least one statement being a tabular expression statement, which produces the query results. In a tabular expression statement, tabular data flows from one tabular query operator to another, starting with a data source and passing through a set of data transformation operators bound together by the pipe (|) delimiter.
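A sketch of a multi-statement query: a let statement followed by a tabular expression statement, delimited by a semicolon. The SampleLogs table and its columns are hypothetical:

let criticalOnly = "Critical";
SampleLogs
| where Level == criticalOnly
| summarize EventCount = count() by Source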

Control commands

Control commands are requests to Kusto to process, and possibly modify, data or metadata. For example, the following control command creates a new Kusto table with two columns, Id and Name:

.create table Logs (Id:string, Name:string)

Control commands are distinguished from queries by their first character: every control command begins with a dot (.). This distinction prevents many kinds of security attacks, because control commands cannot be embedded inside queries. Not all control commands modify data or metadata.
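For example, the following control command only reads metadata; it lists the tables in the current database without modifying anything:

.show tables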

Data visualization

Azure Data Explorer provides a web experience that enables connecting to Azure Data Explorer clusters and writing, running, and sharing Kusto Query Language commands and queries. The web experience is available in the Azure portal and as a stand-alone web application, the Azure Data Explorer Web UI. The Azure Data Explorer Web UI can also be hosted by other web portals in an HTML iframe.

Data visualization and reporting is a critical step in the data analytics process. Azure Data Explorer integrates with various visualization tools that enable visualizing the data and sharing the results across the organization. This data can be transformed into actionable insights that make an impact on the business.

Significance of Azure Data Explorer

Fully managed data service

Azure Data Explorer is a PaaS offering that lets users focus on the data instead of the infrastructure. The powerful and fully managed data analytics service automatically scales to meet ever-changing market requirements. Costs are managed by paying only for the resources used, with no upfront or termination fees. It is globally available and highly scalable.

Time-series analysis

Azure Data Explorer allows creating and analyzing thousands of time series in seconds, enabling near-real-time monitoring solutions and workflows. It includes native support for the creation, manipulation, and analysis of multiple time series.
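A minimal sketch of creating time series with the make-series operator, assuming a hypothetical Telemetry table with Timestamp, Value, and DeviceId columns:

Telemetry
| make-series AvgValue = avg(Value) default = 0
    on Timestamp from ago(7d) to now() step 1h
    by DeviceId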

Fast read-only query with high concurrency

Retrieve ultra-fast results from petabytes of data with optimal performance. Queries can be executed on diverse data sets comprising unstructured (audio, video), semi-structured (XML, JSON), or structured (numbers, dates, strings) data.
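As a sketch of querying semi-structured data, the following parses a JSON string column into a dynamic value; the Events table and its columns are hypothetical:

Events
| extend Payload = parse_json(RawMessage)
| where tostring(Payload.status) == "error"
| project Timestamp, DeviceId = tostring(Payload.deviceId)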

Low-latency ingestion

Azure Data Explorer scales to terabytes of data in minutes. It supports diverse ingestion methods from devices, applications, servers, and services for specific use cases. It also allows fast, low-latency ingestion with linear scaling that supports up to 200 MB of data per second per node.
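Where low latency matters more than throughput, streaming ingestion can be enabled per table with a control command; SampleLogs is a hypothetical table:

.alter table SampleLogs policy streamingingestion enable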

Custom solutions with built-in analytics

Using this PaaS offering, users can build custom solutions with built-in interactive analytics. It supports REST API, MS-TDS, and Azure Resource Manager service endpoints, as well as several client libraries.

Cost-effective queries and storage

Cost management applies to both queries and storage: any number of queries can be executed against the database without incurring additional costs. Get the benefits of a persistent database that adds data to tables automatically, with the flexibility to choose a retention policy that controls how long the data is stored.
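As a sketch, a table's retention policy can be set with a control command; SampleLogs and the 30-day window are illustrative:

.alter-merge table SampleLogs policy retention softdelete = 30d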

Summary

As discussed above, the data generated by enterprises is growing day by day, and there is a need for more effective techniques to handle massive volumes of diverse data. This blog has covered the significance of Azure Data Explorer and how it addresses those business requirements.