Process Azure analysis services through Azure data factory & Logic Apps

My first blog was a detailed explanation of how to use Azure data factory and Logic Apps together to improve your ETL process. Find it here. That blog drew the majority of user traffic, so I decided the time has come to follow it up!

Azure analysis services is often used as an enterprise datamodel. It can be compared with Power BI (Premium) as they share the same underlying architecture. Microsoft's vision is to create a superset of Azure analysis services features within Power BI Premium, which might influence the choice of where to host your datamodel (Power BI Premium / Analysis services). SQLBI has written a nice blog about choosing Analysis services for large datasets. Find it here.

Azure analysis services supports two different methods of storing data: direct query and import. In a direct query setting, data resides in the database and is queried at runtime by the Analysis services engine. This requires the standard pricing tier, as it is a premium Analysis services feature. Import mode loads the data into the datamodel; this is called processing.

Processing Azure analysis services

There are multiple ways of processing Azure analysis services. Besides manual processing, automating the model processing can be accomplished via Azure functions, Azure automation, Logic Apps and even from within SSIS running under the integration runtime in Azure data factory. Important to know is that it requires a service principal (app registration) or an actual username + password for authentication.

This blog describes how to set up the processing in combination with Azure data factory and Logic Apps.

Getting started

To get started, provision an Azure Analysis services instance and an Azure data factory instance, and prepare a datamodel in Visual Studio that sources data from, for example, an Azure SQL database. For this blog I will re-use the datamodel from my previous blog about slicer dependencies.

In a classic Business Intelligence pipeline, Azure data factory functions as an orchestrator calling Logic Apps, data flows / SSIS packages, and it takes care of error handling. The purpose is to extract data from sources, transform it and load it into a datawarehouse and datamodel, enabling the business to better base decisions on facts.

In this last step Analysis services comes into play as a datamodel that can be used for easy reporting and self-service analytics. Analysis services should be processed with data from the datawarehouse, probably modelled as a star schema.

This blog is about how to process Azure Analysis Services with the use of Logic Apps in combination with Azure data factory.

Initiating the processing

Because Azure data factory is going to trigger this new Logic App, we create a Logic App from scratch, triggered by an HTTP request.

The second step is to add an HTTP request so we can call the Azure analysis services API for a refresh. All details about the API can be found here.

The base URL should be altered by replacing the region, server name and model name. /refreshes initiates the actual refresh.

The base url: https://[region]/servers/[myserver]/models/[AdventureWorks]/refreshes

Fig 1

Figure 1 shows the Logic App configured so far. Note the changed URL and the body; this is also described in the API details. For now we have configured a full refresh. One very important setting, and super easy to forget, is to change the asynchronous pattern for this request. This setting can be changed by clicking on the … of the HTTP task. Be sure it's off. If we leave this setting on, we get an immediate response from the API, and that response does not yet contain the location that we need to check for the refresh state.

Authentication can be configured through Active Directory OAuth by providing the app registration details. Other options are possible, but this depends on your approach and policies.

Running this Logic App will trigger Analysis services to start a full refresh. Triggering this Logic App from data factory is as easy as creating a pipeline, dragging a web request component onto the canvas and configuring it with the Logic App url and a sample body as shown in figure 2.

Fig 2

As usual with web requests, we get an immediate response that only tells us the request was successful, not whether the processing succeeded. To find that out we need to check the processing state.

Checking processing state

Compared to checking the state of an Azure SQL database, checking the state of Analysis services requires the refresh location from the original request. This can be accomplished by returning this location to data factory and then using it, in combination with the API, to get the current processing state.

First the Logic App is extended with a Response task as shown in figure 3. Note that if the async pattern is still on, the response does not yet contain the location url.

Fig 3

Back in Azure Data Factory, create a variable (unselect all components and navigate to Variables) and drag a Set variable component onto the canvas. Set the Location variable with the expression:
@activity('Trigger refresh').output.Location. So far the pipeline triggers the refresh and stores the returned location in the Location variable.

To check the state we need a different Logic App. This app will fire an HTTP request at the Analysis services API based on the location that was returned by the first Logic App. Because we need this location in the Logic App, its HTTP trigger needs to accept some input. This can be done by providing a body for the request as shown in figure 4.

Fig 4

The next step in the Logic App is to parse the received JSON and fire the actual HTTP request to get the current processing state. To parse the JSON, add a Parse JSON task, use the body as content and provide the schema as shown in figure 5.

Fig 5
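What the Parse JSON step does can be sketched in a few lines of Python. The property name "Location" matches the request body used in this blog; adjust it if yours differs:

```python
import json

# Sketch of the second Logic App's first steps: receive the body that
# Data Factory posts, parse it, and pull out the refresh location.
incoming_body = '{"Location": "https://region-placeholder/refresh-status"}'
parsed = json.loads(incoming_body)
status_url = parsed["Location"]  # becomes the URL of the HTTP GET that follows
```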

After parsing the JSON we can now add a HTTP request that uses the parsed location to check the current processing state as shown in figure 6.

Fig 6

The last step is to return this response so we can use it in data factory to keep checking until the state is different from InProgress. The complete Logic App, including the details for the response, is shown in figure 7.

Fig 7

Because this Logic App requires a valid url to check for the processing state, we cannot run a test here. The finishing touch, including testing, is done from data factory.

Putting it all together

In Azure data factory, extend the pipeline with an Until loop and use the following expression, as shown in figure 8:

@not(equals('InProgress', coalesce(activity('Check processing state').output.status, 'null')))

It is smart to configure a reasonable timeout so that if something goes wrong the loop does not keep polling.

Fig 8

As the activity use a web request, set the Method to POST and provide the Location variable as part of the body, as shown in figure 9. To prevent this Until loop from running every second, a Wait task of 60 seconds is a smart addition.

Fig 9

The pipeline is now complete and can be tested. In the first step the Logic App is triggered, which posts a processing command to the Analysis services API. It waits for a full response that contains the location, which we can use later to check the current state of this processing command. The Logic App responds with this location.

The next step extracts the location from the response and stores it in a variable. Finally the Until loop fires a web request to the Logic App that gets the current state of this specific processing command. In data factory this process is repeated until the response is no longer InProgress, so either succeeded or failed. Figure 10 shows the full pipeline.

Fig 10
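The whole pipeline boils down to a trigger-then-poll loop. Here is a minimal Python sketch, where trigger_refresh and get_state are hypothetical stand-ins for the two Logic Apps:

```python
import time

def process_model(trigger_refresh, get_state, poll_seconds=60, max_polls=60):
    """Trigger the refresh, keep the returned location, then poll the state
    until it is no longer 'inProgress' (so 'succeeded' or 'failed')."""
    location = trigger_refresh()          # first Logic App: returns the location
    for _ in range(max_polls):            # cap the loop, like the Until timeout
        state = get_state(location)       # second Logic App: current state
        if state != "inProgress":
            return state
        time.sleep(poll_seconds)          # the 60-second Wait task
    raise TimeoutError("refresh did not finish within the polling window")
```

The Until loop in data factory plays the role of the for loop here: it keeps calling the state Logic App and waiting until the state changes.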

This pipeline can be saved as a "Process analysis services" pipeline and called from, for example, a master pipeline. This way we get small separate pipelines that can be combined from a master / orchestrating pipeline.


Processing Analysis services can be done in different ways. I prefer Logic Apps over Azure functions because there is no need for any code, so it's really transparent and can be re-used for other scenarios. It took me some time to figure out that the key thing in this set-up is to change the async pattern for the HTTP request that fires the process command, but besides that it's super easy to set up.

Set up dependent slicers with Power BI or Analysis services (tabular)

One of the questions frequently asked by clients working with Power BI and/or Analysis services (tabular) is how to make filters or slicers dependent on attributes that are not from the same table.

During the evolution of Power BI, users got more and better options to set this up. This blog describes the most commonly used scenarios and, in my opinion, the best option.

The case

Consider the datamodel in figure 1. In this case there are two dimensions (Product and Store) related to a fact table containing Sales data. Both dimensions have a 1-N relationship to the fact table. Notice the arrows on the relationships: both point towards the Sales table. This means that filters applied to both dimensions are pushed through to the Sales table. For example, if one product is filtered, only the sales for that product are shown. This is the desired result, as we want to slice the sales data by the describing attributes in the related dimensions.

Fig 1

Looking at the data in the datamodel below, we can see that not all products are sold in all stores and that there is also a product that hasn't been sold anywhere (Foldingbike).

Fig 2

On top of this datamodel a simple report can be created, as shown in figure 3. This report page contains two charts that display sales information by product and store, and two slicers for ProductName and StoreName.

Fig 3

When one store is selected, all products remain visible in the product slicer, even though, as we can see, this store does not sell all products.

Fig 4

For a better end-user experience, only products that Bike Amsterdam sells should be visible. Before implementing a solution it is important to know why this slicer still shows all products. When I tell people it is because the two slicer tables are not directly related, the usual answer is that they are related through the Sales table, and that is totally true. The reason a selected store does not filter the Product table lies in how this relationship works. This brings us to the first option.

Option one: changing relationship behaviour

The default 1-N relationship (from Product or Store to Sales, with the arrow pointing towards Sales) only pushes filters from the one side to the many side. Selecting one store filters the Sales table down to only that store. But if we want to filter the products after selecting one store, that store filter must be pushed to the Product table, which does not contain stores. So the store filter must be transformed into a product filter containing all products that are sold by that one store.

The first and most commonly used option to get these dependent slicers is changing the default relationship to one that pushes the applied filters in both directions. This can be done in the relationship view as shown in figure 5.

Fig 5

After changing the cross filter direction of the Product – Sales relationship to "Both", the model behaviour changes as follows. When Bike Amsterdam is selected, the Sales table is automatically filtered down to only this store. This filter is then transformed into all products sold by Bike Amsterdam and pushed to the Product table, resulting in a product slicer that only shows products sold by Bike Amsterdam (shown in figure 6).

Fig 6

This works as expected, but this relationship setting should be used with caution. In a datamodel like this it is safe to use, but what if the datamodel is expanded and contains more tables?

Consider the changed datamodel in figure 7. In a situation where product and store selections should depend on each other, both relationships have their cross filter direction set to Both. So selecting a product filters the stores, and the other way around.

Fig 7

In the StoreLayout table, the number of square feet per section per store is stored. After adding a new chart to the report we can compare the distribution of space per store (figure 8).

Fig 8

Per store we can see the distribution of space. Now imagine a Power BI report that contains two pages. One page containing sales information per product and store (Figure 3 & 4) and one page that contains the store layout (figure 8).

As a user you are analyzing the sales per product and especially the non-existing sales for the Foldingbike. Because you want to keep track of this product you decide to leave the filter and continue analysis on square feet on the next page.

The chart from figure 8 is now empty as shown in figure 9.

Fig 9

This is because the cross filter direction of the Sales – Store relationship is set to Both, and this is why the setting should be used with caution. I'll briefly explain.

The slicer on product is set to Foldingbike. The relationship from Product to Sales (1-N) filters the Sales table down to only sales of folding bikes. Because there are none, the filter context for the Sales table contains no data. Because the relationship between Sales and Store is set to Both, the filter on the Foldingbike is transformed into a filter for all stores that sell this bike. Because there are no sales, the Store table is also filtered down to no records. So there is an empty result for both Sales and Store. Following the relationship from Store to StoreLayout, the filter on the Store table (no results) is pushed to the StoreLayout table (default 1-N behaviour). Figure 10 shows how relationships cascade after changing the cross filter direction. There is no relationship between Product and StoreLayout, but due to this setting there effectively is one.

Fig 10

Option two: custom slicer visual

Another frequently used option to get dependency between slicers is a custom slicer visual. The hierarchyslicer and Smart filter are two visuals that accomplish the desired result. Both work similarly and only need the fields to be selected.

Figure 11 shows this set-up with the hierarchyslicer custom visual. In this visual the dependency is shown as a hierarchy that can be collapsed and expanded. Keep in mind that the more dependencies between slicers, the deeper the hierarchy gets.

Fig 11

Figure 12 shows this set-up with the Smart filter custom visual. Slicers are shown separately but contained in one visual. Selection in each slicer is done from a list or dropdown. It has a lot of styling options, but some are only in the pro (paid) version.

Fig 12

Option three: default slicers with filter

For a long time Power BI did not support setting filters on slicer visuals. This prevented report creators from basing slicers on a subset of data, for example showing all attributes except the Unknown value. It also applies to the case in this blog, where only products that have sales should be shown.

Not so long ago this inability to set filters on slicer visuals was fixed. Unfortunately this does not yet work for the new filter pane, but it will probably be implemented soon.

It's super easy to set up. Select the Product slicer and look at the filter section as shown in figure 13. By default the attribute the slicer is based on is shown (ProductName). We are now able to add more filters. These can be all the different filter options (Standard, TopN, Relative, ..) but measures are also supported. Add the Sum of Amount measure and set the criteria to "is not blank". The slicer now only shows product names where the Sum of Amount is not blank.

Fig 13

Looking back at the report (figure 14), we can now see that after selecting Bike Brussel only the products that have sales for Bike Brussel remain in the slicer.

Fig 14

The same filter can be added to the StoreName slicer to get the functionality in reverse (product filters stores). This third option allows report creators to get dependent slicers without making model changes.


With the ability to set filters on slicers, report creators can create user-friendly slicers without the need for model changes. All options accomplish the same result, but in my opinion option three is preferred due to its simplicity and because there is no need for custom visuals (which might be disabled in your organisation).

Hidden Gem: Power BI mobile reports

I am not the biggest fan of Power BI dashboards. They look amazing on a mobile phone and are a great way to combine insights from multiple reports into one view, but in my opinion they lack certain functionality to be used as a default.

What would make Power BI dashboards great

There are some basic functionalities that would improve the usability for Power BI dashboards in a great way. I’ll briefly summarize my view on these changes.

  • Updatable pinned visuals
    You decide your new report is finished. You pin a visual to a new or existing dashboard and add the visual to the mobile layout. Then you notice that a filter was applied during the pin action. This requires the visual to be pinned to the dashboard and mobile view again, not to mention the resizing and renaming in the dashboard. Changes in colours, labels, set slicers etc. require pinned items to be re-pinned. This makes the Power BI dashboard far too rigid.
  • Global slicers
    Looking at a Power BI dashboard, several visuals may show a different granularity, for example in the years that are shown. To create a single pane of glass that shows the data at the same granularity, it would be an amazing addition to have something like global slicers that work across reports. A global slicer for "Year" would then filter all visuals to the same period. A good starting point would be the Edit interactions option from Power BI desktop.
  • Multi-pin in mobile view
    A mobile dashboard view can be styled nicely using titles and images. If you want to add images to this mobile view you need to have them in your dashboard, and even if you want to use the same image multiple times, like a logo, you need to have that image in your dashboard multiple times. An option to multi-pin the same visual would be a great addition to the mobile view and would also keep the original dashboard cleaner.
  • More flexibility in dashboard layout
    The Power BI dashboard could be improved with more flexibility in the dashboard tile layout: being able to change the width and height without restrictions, default settings for titles, subtitles, font etc. and, last but not least, the ability to add content items, like images, that should only be visible in the mobile view.

Even though I see a lot of room for improvement, I still fancy the easy and intuitive way a dashboard can be created in very little time. In my day-to-day job I see a lot of businesses using a default approach to Power BI content: Power BI desktop, Power BI service (report), Power BI service (dashboard) and finally sharing it with colleagues. This probably has a lot to do with dashboard sharing being the only option in the beginning of Power BI.

User navigation

In a situation where a dashboard is shared with an end-user, this person can navigate directly to the shared dashboard. After selecting a visual, the user is directed to the underlying report for further analysis. This works fine and the user gets to the report fairly quickly.

If the user navigates to the dashboard via the Power BI mobile App this experience is slightly different. I really fancy this dashboard view because it has lots of space with enough room for clear visuals.

Fig 1

This dashboard view is read-only and after selecting one visual it will open to full screen. The user can see the data labels here or choose to annotate and share his thoughts (Fig 2).

Fig 2

The interactivity is limited to this visual only. If this user wants to interact with other visuals, they can navigate to the underlying report (Fig 3) by selecting the little bar chart in the top right corner.

Fig 3

Even though phones are getting bigger and bigger, this report view is still not very user friendly, especially when there are lots of visuals and slicers on the canvas.

Power BI Phone layout

From within Power BI desktop each report page can have a mobile phone layout. This can be selected from the View tab on the ribbon.

Fig 4

By dragging & dropping items from the right panel, the mobile view can be created. Yes, multi-pin would be a great addition here too!

After saving and publishing the report, this mobile view can be accessed by selecting the little bar chart in the top right from figure 2.

Fig 5

Now the report is also readable on mobile devices and the user is able to interact between visuals, as shown in figure 6. This greatly improves the user experience and data analysis on a mobile phone.

Fig 6

Does this replace the Power BI dashboard?

The Power BI dashboard still has some great features, like subscriptions, alerts and multi-report visuals, but if you don't use these, this mobile view could replace the Power BI dashboard. You can also share a Power BI report directly, without having to create a Power BI dashboard.


I see lots of organizations using the Power BI mobile app without configuring their reports for mobile consumption. It takes a little more time to create a mobile view for every report page, but it enhances the mobile user experience greatly!

Dynamic titles in Power BI

Since the April release of Power BI desktop it is possible to use dynamic titles in your visuals. This can be super handy when there is a need to explain more than a static title can. For example, a static title would be "Sales over time", while a dynamic variant could be "Sales between Jan 2019 and July 2019".

To get to this option select a visual, navigate to the title menu and select “Conditional formatting” from the three dots dropdown.

In the popup that appears you can select an attribute. This can be either a column or a measure, but it must be of the Text data type. If you select a column you must specify the summarization, because this dynamic title can only handle a single value. To get started it is important to know what context is applied to the visual and how we can access the attributes that will make up our dynamic title.

Evaluation context

In DAX there is always an evaluation context. This is the context that affects a DAX expression (calculated column, measure) when it is executed. Most important is that you have an understanding of filter context, row context and context transition. You'll start exploring these kinds of contexts whenever you build a measure that does not return the value you would expect. To explain these contexts I prefer referring to the best explanation, which can be found here. In this blog I will refer mainly to the evaluation and filter context.

Getting started

First let's start with a simple dataset and some visuals.

Fig 1
Fig 2

We've got some data for several months and two products. The title is set to "Sales over time", and it should include an indication of the period for which data is shown, let's say "Sales from 1-1-2019 until 1-6-2019".

To do this we need to be able to grab both dates from the dataset. This can be done if we know the evaluation context of this visual.
The evaluation context is the environment in which a formula is evaluated, and this context changes through slicers, filters, and rows and columns in visuals. For this visual there is a filter context at each data point (every date), but if we look at the chart itself we could refer to the title as the "total" value: no filter is set by any slicer, filter, row or column value. If you look back at the data table (Fig 1) you can see that no rows are excluded by the values in the visual.

Building the dynamic title

Create a new measure from the field list. To create the dynamic title the below DAX formula would do the trick.

DynamicTitle =
VAR FirstDt =
    MIN ( Data[Date] )
VAR LastDt =
    MAX ( Data[Date] )
RETURN
    "Value from " & FirstDt & " until " & LastDt

Because dates are chronologically ordered, we can grab the first date by selecting the minimum value and the last date by selecting the maximum value. I prefer to use variables because this enhances readability and has the benefit that each variable is only calculated once.

The final line builds the dynamic title, resulting in the desired outcome.

We could enhance the title in combination with the Product slicer. A slicer changes the filter context, so whenever one product is selected we could use MAX(), MIN() or any other summarization to select this product, the same way as with the date, but we could also concatenate the selected products by using CONCATENATEX().

Title =
VAR FirstDt =
    MIN ( Data[Date] )
VAR LastDt =
    MAX ( Data[Date] )
VAR Product =
    CONCATENATEX ( VALUES ( Data[Product] ); Data[Product]; "," )
RETURN
    "Value from " & FirstDt & " to " & LastDt & " for product (" & Product & ")"
Fig 3 The new title with product A selected
Fig 4 With both products selected


Dynamic titles are a great way to give more information about what is displayed in a visual. Whenever you start with dynamic titles you should consider when this title becomes a paragraph that creates information overload. The basic idea should be that it helps understanding the visual.

Logic Apps cookbook for Business Intelligence

In my previous blog I wrote about how to use Logic Apps in combination with Azure data factory to improve ETL processes. Read it here. That blog describes how both can be combined and how to check the status of an Azure SQL database or datawarehouse.

Now getting that status is useful, but most of the time it is only useful together with an action like scale-up, scale-down, scale-out, pause or resume. This blog can be seen as a cookbook with several Logic App recipes that can be leveraged from, for example, Azure data factory to build an awesome automated pipeline!

HTTP Request > 202 (Accepted)

Coming from the field of Business Intelligence, you are probably used to running ETL packages containing tasks with a couple of statuses like running, succeeded or failed. Such a process waits for a final response before continuing to the next task. With HTTP requests this is different: you get an immediate 202 response (if accepted) that does not hold a final status. It just says: OK, the request is fired. So to see whether a Logic App succeeded in doing what was asked, we can use another Logic App that polls for the current status. The combination of these Logic Apps forms the basis of this cookbook.

Scaling an Azure SQL Database

The Logic App to accomplish this is shown in the picture above. We start off with a received HTTP request. After saving your Logic App, this trigger holds the unique url that will trigger this Logic App from, for example, Azure data factory.

Because we want to re-use this Logic App for up- and downscaling, we use a Parse JSON component to extract the message that has been sent to this url.

Finally it fires an HTTP request to the database REST API to scale the database.
Now let's go through these steps one by one.

When a HTTP request is received

As described, we will re-use this Logic App to scale up and down, therefore we need to tell the Logic App which ServiceObjective to scale to. In this example we only scale within the same tier (Standard). Once you get the idea, it's easy to add other inputs, like tier, to this Logic App.

Click on "Use sample payload to generate schema" and enter a body like:

{
  "ServiceObjective": "S1"
}

The JSON schema should look like the image.

Parse JSON

Because we want to use the ServiceObjective input in the HTTP request, we parse the body so that it is valid JSON and the ServiceObjective can be used directly.

Click in the Content text field and, in the pop-up menu, select the Body from "When a HTTP request is received". Then paste the same JSON schema from the previous component into the schema field.


Choose PUT from the Method dropdown. PUT is needed to perform this change to the database. Paste the url below, where the [] parts are replaced with your own settings:

[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2017-10-01-preview

Paste the code below into the Body field. For now only the ServiceObjective is variable, but this could be extended with tier, location etc.

  "location": "westeurope",
  "sku": {
    "name": "@{body('Parse_JSON')?['ServiceObjective']}",
    "tier": "Standard"

Choose Managed identity under Authentication, but remember to add this Logic App's identity to the SQL Server where the database is hosted. More info about that in my previous blog, under Authentication.

The full HTTP reference for PUT can be found in the Azure REST API documentation.
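Putting the pieces together, the PUT request can be sketched in Python as below. All argument values (subscription, resource group, server, database, service objective) are placeholders you must fill in yourself:

```python
import json

def build_scale_request(subscription, resource_group, server, database,
                        service_objective):
    """Build the PUT URL and body to scale an Azure SQL database,
    mirroring the Logic App's HTTP task above."""
    url = (f"{subscription}"
           f"/resourceGroups/{resource_group}/providers/Microsoft.Sql"
           f"/servers/{server}/databases/{database}"
           "?api-version=2017-10-01-preview")
    body = {
        "location": "westeurope",
        "sku": {"name": service_objective, "tier": "Standard"},
    }
    return url, json.dumps(body)

url, body = build_scale_request("sub-id", "rg", "srv", "db", "S1")
```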

Running the Logic App

The Logic App is finished, so it's time to test it. Open or create an Azure data factory, create a pipeline, add a Web component to the canvas and paste the url. Select POST as the method, because we want to send something to this Logic App, and specify a body like:

{
  "ServiceObjective": "S0"
}
These settings would be better off stored in variables & parameters, but that might be content for a later blog.

Now run this pipeline and check out the database in the Azure Portal. It’s scaling right?

Now that our first recipe is ready, we can create a lot more in a short time. For example, what if we would like to know when the database is back online?

Getting current state of an Azure SQL Database

Create a new Logic App or clone the previous one. Set it up like the image below.

If it is cloned from the previous Logic App, be sure to change all methods to GET (we want to get the status), remove the JSON input & Parse JSON steps, and add a Response task with status code and body. Also don't forget to add this identity to the SQL Server.

From data factory create a pipeline with GET as a method, no body and run!

Scale and wait until resumed

A great combination could be to scale a database and wait until it is resumed. With both Logic Apps this can easily be accomplished from Azure data factory. The example below uses the first Logic App to scale a SQL Datawarehouse. Yes, that's also just a database; there is only a tiny difference in the url:

[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2014-04-01-preview

Until resumed holds an expression that retrieves the current state of the SQL Datawarehouse and finishes when it equals Online:

@equals('Online', coalesce(activity('SQL DW State').output.status, 'null'))

The Until loop runs the Logic App that gets the current state, followed by a 60-second wait. While the state is not Online, this process continues.

Pause / unpause SQL Datawarehouse

Another great recipe is a Logic App that can pause or resume your Azure SQL Datawarehouse, so it only runs when needed. For this component in particular this is very handy, because it's not the cheapest one.

Again it's just like the first Logic App, and even easier. Use POST as the method and the url below:

[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]/pause?api-version=2014-04-01-preview

or to resume:

[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]/resume?api-version=2014-04-01-preview
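Because pause and resume only differ in the last url segment, a small helper covers both. This is a sketch; the argument values are placeholders for your own settings:

```python
def build_pause_resume_url(subscription, resource_group, server, database, action):
    """Build the POST URL to pause or resume an Azure SQL Datawarehouse."""
    if action not in ("pause", "resume"):
        raise ValueError("action must be 'pause' or 'resume'")
    return (f"{subscription}"
            f"/resourceGroups/{resource_group}/providers/Microsoft.Sql"
            f"/servers/{server}/databases/{database}/{action}"
            "?api-version=2014-04-01-preview")

pause_url = build_pause_resume_url("sub-id", "rg", "srv", "dwh", "pause")
```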


It is super easy and useful to combine Logic Apps and Azure data factory to dynamically adapt your solution to its needs: resume and scale up when data processing needs to happen, scale down and pause when it doesn't.

In one of my next blogs I will write about processing Azure analysis services via Logic Apps.

Using logic apps and Azure data factory to improve your ETL processes.

Logic Apps are great when it comes to performing recurring tasks. There are a bunch of connectors available, and it seems your own creativity is the limit of what you can do with them. Of course, when it comes to complex, long-running tasks it might be better to look at a different solution.

Another great product is Azure data factory, especially after version 2 was released and a visual GUI made it so much easier to build data driven pipelines.

The combination of these two tools is great and I use them a lot together. Logic Apps can be triggered by an HTTP request, and in Azure data factory you have a web task available by default. So to run a Logic App from Azure data factory you simply add its url to a web task and you're good to go.

The example above fires a web request to the url that is automatically created after a Logic App is created from an HTTP trigger.
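Under the hood, that web task is nothing more than a plain HTTP call to the trigger url. A minimal Python sketch of the same request, where the url is whatever your own Logic App generated:

```python
import urllib.request

def trigger_logic_app(url, method="GET", timeout=30):
    """Fire a Logic App HTTP trigger, like the ADF web task does.
    Returns the HTTP status code and the response body as text."""
    req = urllib.request.Request(url, method=method)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read().decode()
```

The method must match whatever the Logic App trigger expects, which is exactly the point made below about keeping the ADF web task and the Logic App in sync.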

Depending on what the logic app does you need to deal with authentication. For now we leave it this way and start by creating a simple logic app.

New logic app

Go to Logic Apps in the Azure portal, click Add and specify a name. In this case I will name it "MyLogicApp".
After submitting, your Logic App should be in the list; if not, hit refresh. Open your Logic App and on the overview page click "View in Logic Apps designer".

This is where the magic starts. Scroll down and find the “Blank Logic App”

Search for “Http” and select the following

From the "Add new parameter" dropdown select Method and choose GET.
Hit Save and look for the url that is now available in the "HTTP POST URL" field.

The logic App should look like this now.

If you run this logic app you can see that it succeeds. Of course, there is nothing that can fail 🙂

New Azure data factory pipeline

Go to data factories in the Azure portal. Create a data factory if you have none; otherwise open the data factory by selecting it and choosing "Author & Monitor". This brings you into the development GUI for Azure data factory.

Under Factory resources hit the “+” and select New Pipeline

From the Activities menu drag a Web task (under General) onto the canvas and give it a name. Select the Web task and go to Settings. Paste the Logic App url into the URL field and select GET as the method.

This method depends on the method we selected while creating the Logic App. There are several different methods, but the most used are GET, POST, PUT and DELETE. A full reference to these methods can be found in the HTTP specification.

Run the pipeline by selecting Debug.

This should result in green, succeeded messages. This is the first step in running web requests from Azure data factory.

A practical example

In a modern data platform architecture it might be interesting to do some up- and downscaling before starting a long-running ETL process. In this example we are going to check the current configuration of an Azure SQL database. This might be the first step: once we know the current service level objective, we can create another Logic App to up- or downscale this database.

Create an Azure SQL database if there is none available. This can be done under SQL databases in the Azure portal. If there is no SQL Server yet, it should be created as well; this can be done from the database creation wizard.

Navigate back to Logic Apps and create a new Logic App, or go back to the designer interface of the previous Logic App. Start again with an HTTP Request trigger and hit the "+ New step" button. Choose HTTP.

By selecting HTTP, this Logic App is going to perform an HTTP request itself to get the state of a SQL Server database.

Fill in the fields Method, URL and Authentication.

Change the url parts between []: []/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2017-10-01-preview

A full reference on these HTTP requests can be found in the Azure REST API documentation.
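For illustration, here is what that GET boils down to in Python. The parameter names are hypothetical stand-ins for the `[]` parts of the url, and the bearer token would come from whichever authentication option you pick below:

```python
import json
import urllib.request

def database_state_url(subscription_id, resource_group, server, database):
    """Fill the [] placeholders of the management url (names are placeholders)."""
    return (
        "https://management.azure.com/subscriptions/" + subscription_id
        + "/resourceGroups/" + resource_group
        + "/providers/Microsoft.Sql/servers/" + server
        + "/databases/" + database
        + "?api-version=2017-10-01-preview"
    )

def get_database_status(token, subscription_id, resource_group, server, database):
    """GET the database resource with a bearer token and pull out
    properties.status (e.g. "Online") from the JSON response."""
    req = urllib.request.Request(
        database_state_url(subscription_id, resource_group, server, database),
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["properties"]["status"]
```

The Logic App's HTTP task does exactly this call; the only difference is where the token comes from.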

It's time to save and run this Logic App. After running you will notice that it fails to get the database status. We did specify Managed Identity as our authentication, but we did not give this identity access to our SQL Server. Before we go into this part, let me quickly explain the two types of authentication I normally use.


From the Authentication dropdown a couple of options are available. I am not familiar with all of them, but I prefer Active Directory OAuth or Managed Identity. With OAuth you need to create an app registration within Active Directory, which can be referred to as a service principal with a username (unique identifier) and a password. Together with the tenant id, this account can be used to authenticate on the Azure components you give it permission to.

The Managed Identity is not yet available on all Azure components, but it is quickly gaining ground. For each Azure service an identity can be created in Azure Active Directory that can be used between Azure services without putting usernames and passwords in your code. Reference: the Azure managed identities documentation.

In our Logic App we selected the Managed Identity but did not yet give it permission. Time to do so. First we need to set up this identity, because for Logic Apps it is not automatically created in Azure AD. From our Logic App, navigate to Identity and toggle the status to On.

Hit Save and navigate to your Azure SQL Server. Navigate to Access control, click Add, select “Contributor” as role and now search for your Logic App. Select it and hit save.

Now go back to your logic app and rerun! This can be done directly from Run trigger in the Overview tab. It succeeds!

At this point we have created a Logic App that fires an HTTP request to our SQL Server database to return its current state, authenticated with a managed identity. Back to Azure data factory to tie both together and do something with this result.

Combining Logic App and Azure data factory

Navigate back to your data factory. Create a new pipeline and put a web task on the canvas. Specify the Logic App url and GET as its method, and run the pipeline. As you can see it succeeds, but the response is blank.
You can see this response in the output window by selecting the outgoing arrow under Actions. The actual response is:
{ “Response”: “” }

This is because we did not specify an output in our Logic App. This is easy to set up. Navigate back to your Logic App, open the designer and add a new task by searching for Response.

After selecting the status code field a popup appears. Select the status code from there and do the same with the body field.

Go back to your Azure data factory pipeline and re-run it. Check the output; it should look like below.

    "sku": {
        "name": "Basic",
        "tier": "Basic",
        "capacity": 5
    "kind": "v12.0,user",
    "properties": {
        "collation": "SQL_Latin1_General_CP1_CI_AS",
        "maxSizeBytes": 2147483648,
        "status": "Online",
        "databaseId": "",
        "creationDate": "",
        "currentServiceObjectiveName": "Basic",
        "requestedServiceObjectiveName": "Basic",
        "defaultSecondaryLocation": "northeurope",
        "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
        "zoneRedundant": false,
        "earliestRestoreDate": "",
        "readScale": "Disabled",
        "readReplicaCount": 0,
        "currentSku": {
            "name": "Basic",
            "tier": "Basic",
            "capacity": 5
    "location": "westeurope",
    "id": "",
    "name": "",
    "type": "Microsoft.Sql/servers/databases"

Now add an If task to the canvas and drag the green line from the web task to the If task. Select the If task, go to Settings and click the "Add dynamic content" text. Specify this expression: @equals('Basic', coalesce(activity('Get database state').output.sku.tier, 'null'))
Be aware that "Get database state" is the name of the web component.
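The expression is easier to read as its Python equivalent; `web_output` stands for the JSON that the "Get database state" web task returned:

```python
def is_basic_tier(web_output):
    """Python mirror of the ADF expression
    @equals('Basic', coalesce(activity('Get database state').output.sku.tier, 'null')):
    take sku.tier if present, fall back to 'null', compare with 'Basic'."""
    tier = (web_output.get("sku") or {}).get("tier") or "null"
    return tier == "Basic"
```

The `coalesce`/`'null'` fallback keeps the condition from failing when the web task returned no sku at all; the If task then simply takes the false branch.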

It should look like this now

Now go to Activities and click the button under True. Add a Wait task to the canvas and run the pipeline. In the output window you can see that the Wait task was triggered, so now we could add another web request to scale this database to Standard or Premium.

Like I said before, with enough creativity great processes can be created with Logic Apps and Azure data factory.

But why Logic Apps?

Everything we've done so far with Logic Apps can also be accomplished with only Azure data factory. For example, we could have used the web request to the database from our Logic App directly in Azure data factory.

The url is exactly the same as in the Logic App, the method is also GET, and MSI is selected with the default Azure managed identity resource url. So why use Logic Apps at all?

If you run processes or tasks that are not initiated from Azure data factory, or you have multiple data factories running, it is a good idea to put these actions into a Logic App. This gives you a higher level of flexibility: if anything needs to be changed, you only have to change the Logic App.

If you only run processes from Azure data factory, you could just create a couple of pipelines, each with a specific task, that can be referred to from another pipeline by using the "Execute Pipeline" task.

One of my next blogs will be full of examples of web requests that will improve your ETL process. Start thinking about:

  • Up- / downscale database / datawarehouse
  • Pause / resume datawarehouse
  • Process Azure analysis services
  • Start / Stop integration runtime


There are a lot of possibilities with web requests that can run from Azure data factory alone or in combination with Logic Apps, to get a more dynamic approach to your ETL processes. If you can dynamically scale your whole architecture only when needed, this can deliver great performance improvements while lowering your cost when everything is scaled down and paused after execution.