Hidden Gem: Power BI mobile reports

I am not the biggest fan of Power BI dashboards. They do look amazing when navigating via a mobile phone and are a great way to combine insights from multiple reports into one view, but in my opinion they lack certain functionality to be used as a default.

What would make Power BI dashboards great

There are some basic features that would greatly improve the usability of Power BI dashboards. I'll briefly summarize my view on these changes.

  • Updatable pinned visuals
    You decide your new report is finished. You pin a visual to a new or existing dashboard and add the visual to the mobile layout. Then you notice a filter was applied during the pin action. This requires the visual to be pinned to the dashboard and the mobile view again, not to mention the resizing and renaming in the dashboard. Changes in colours, labels, set slicers etc. require pinned items to be re-pinned. This makes the Power BI dashboard far too rigid.
  • Global slicers
    Looking at a Power BI dashboard, it could be that several visuals have a different granularity, for example in the years that are shown. To create a single pane of glass that interprets the data at the same granularity, it would be an amazing addition to have something like global slicers that work across reports. A global slicer for “Year” would then filter all visuals to the same period. A good starting point would be the Edit interactions option from Power BI desktop.
  • Multi-pin in mobile view
    A mobile dashboard view can be styled nicely by using titles and images. If you want to add images to this mobile view you need to have them in your dashboard, and even if you want to use the same image, like a logo, multiple times, you need to have that image in your dashboard multiple times. An option to multi-pin the same visual would be a great addition to the mobile view and would also keep the original dashboard cleaner.
  • More flexibility in dashboard layout
    The Power BI dashboard could be improved by having more flexibility in the dashboard tile layout: being able to change the width and height without restrictions, default settings for titles, subtitles, font etc. and, last but not least, the ability to add content items that are not shown on the dashboard itself, like images that should only be visible in the mobile view.

Even though I see a lot of room for improvement, I still fancy the easy and intuitive way a dashboard can be created in very little time. In my day-to-day job I see a lot of businesses using a default approach to Power BI content. This starts with Power BI desktop, then Power BI service (report), then Power BI service (dashboard) and finally sharing it with colleagues. This probably has a lot to do with dashboard sharing being the only sharing option in the beginning for Power BI.

User navigation

In a situation where a dashboard is shared with an end-user this person can directly go to powerbi.com to navigate to the shared dashboard. After selecting a visual the user is directed to the underlying report for further analysis. This works fine and the user can get to the report fairly quickly.

If the user navigates to the dashboard via the Power BI mobile App, this experience is slightly different. I really fancy this dashboard view because it uses the space well, with enough room for clear visuals.

Fig 1

This dashboard view is read-only and after selecting one visual it will open in full screen. The user can see the data labels here or choose to annotate and share their thoughts (Fig 2).

Fig 2

The interactivity on this visual is limited to this visual only. If the user wants to interact with other visuals, they can navigate to the underlying report (Fig 3) by selecting the little bar chart in the top-right corner.

Fig 3

Even though phones are getting bigger and bigger, this report view is still not very user friendly, especially when there are lots of visuals and slicers on the canvas.

Power BI Phone layout

From within Power BI desktop each report page can have a mobile phone layout. This can be selected from the View tab on the ribbon.

Fig 4

By dragging and dropping items from the right panel the mobile view can be created. Yes, it's true that multi-pin would be a great addition here as well!

After saving and publishing the report, this mobile view can now be accessed by selecting the little bar chart in the top right from Figure 2.

Fig 5

Now the report is also readable on mobile devices and the user is able to interact between visuals as shown in Figure 6. This greatly improves the user experience and data analysis on a mobile phone.

Fig 6

Does this replace the Power BI dashboard?

The Power BI dashboard still has some great features like subscriptions, alerts and multi-report visuals, but if you don't use these, this mobile view could replace the Power BI dashboard. You can also directly share a Power BI report without having to create a Power BI dashboard.

Conclusion

I see lots of organizations using the Power BI mobile app without configuring their reports for mobile consumption. It takes a little bit more time to create a mobile view for every report page but it enhances the user experience on mobile greatly!

Dynamic titles in Power BI

Since the April release of Power BI desktop it is possible to use dynamic titles in your visuals. This can be super handy when there is a need to explain more than a static title can. For example, a static title would be “Sales over time” and a dynamic variant could be “Sales between January 2019 and July 2019”.

To get to this option select a visual, navigate to the title menu and select “Conditional formatting” from the three dots dropdown.

In the popup that appears you can select an attribute. This can be either a column or a measure, but it must be of the Text data type. If you select a column you must specify the summarization, because the dynamic title can only handle a single value. To get started it is important to know what context is applied to the visual and how we can access the attributes that will create our dynamic title.

Evaluation context

In DAX there is always an evaluation context. This is the context that affects a DAX expression (calculated column, measure) when it is executed. Most important is that you have an understanding of filter context, row context and context transition. You'll start exploring these kinds of contexts whenever you build a measure that does not return the value you would expect. To explain these contexts I prefer referring to the best explanation that can be found here. In this blog I will refer mainly to the evaluation and filter context.

Getting started

First, let's start with a simple dataset and some visuals.

Fig 1
Fig 2

We've got some data for several months and two products. The title is set to “Sales over time” and this should get an indication of the period for which data is shown, let's say “Sales from 1-1-2019 until 1-6-2019”.

To do this we need to be able to grab both dates from the dataset. This can be done if we know what the evaluation context for this visual is.
The evaluation context is the environment in which a formula is evaluated, and this context changes through slicers, filters, and rows and columns in visuals. For this visual there is a filter context at each data point (every date), but if we look at the chart itself we could refer to the title as being the “total” value: there is no filter set by any slicer, filter, row or column value. If you look back at the data table (Fig 1) you can see that no rows are excluded when matching against the values in the visual.

Building the dynamic title

Create a new measure from the field list. The DAX formula below will do the trick for the dynamic title.

DynamicTitle = 
VAR FirstDt =
    MIN ( Data[Date] )
VAR LastDt =
    MAX ( Data[Date] )
RETURN
    "Value from " & FirstDt & " until " & LastDt

Because dates are chronologically ordered we can grab the first date by selecting the minimum value and the last date by selecting the maximum value. I prefer to use variables because this enhances readability and has the benefit that each value is only calculated once.

The final line builds the dynamic title, resulting in the desired outcome.

We could enhance the title in combination with the Product slicer. A slicer changes the filter context, so whenever one product is selected we could use MAX(), MIN(), FIRST(), LAST() or any other summarization to select this product the same way as with the date. But we could also concatenate the selected products by combining SELECTEDVALUE() with CONCATENATEX(), as shown below.

Title = 
VAR FirstDt =
    MIN ( Data[Date] )
VAR LastDt =
    MAX ( Data[Date] )
VAR Product =
    SELECTEDVALUE (
        Data[Product];
        CONCATENATEX ( VALUES ( Data[Product] ); Data[Product]; "," )
    )
RETURN
"Value from " & FirstDt & " to " & LastDt & " for product (" & Product & ")"
Fig 3 The new title with product A selected
Fig 4 With both products selected

Conclusion

Dynamic titles are a great way to give more information about what is displayed in a visual. Whenever you start using dynamic titles you should watch out for the point where the title becomes a paragraph and creates information overload. The basic idea should be that it helps in understanding the visual.

Logic Apps cookbook for Business Intelligence

In my previous blog I wrote about how to use Logic Apps in combination with Azure data factory to improve ETL processes. Read it here. That blog describes how both can be combined and how to check the status of an Azure SQL database or datawarehouse.

Now getting that status is useful, but most of the time it is only useful together with an action like scale-up, scale-down, scale-out, pause or resume. This blog can be seen as a cookbook with several Logic App recipes that can be leveraged from, for example, Azure data factory to build an awesome automated pipeline!

HTTP Request > 202 (Accepted)

Coming from the field of Business Intelligence you are probably used to running ETL packages that contained tasks with a couple of statuses like running, succeeded or failed. What happened there is that the process waited for a final response before continuing to the next task. With HTTP requests this is different. You get an immediate 202 response (if the request is accepted) that does not hold a final status. It just says: OK, the request is fired. So to see whether a Logic App succeeded in doing what was asked, we can use another Logic App that polls for the current status. The combination of these Logic Apps is what forms the basis of this cookbook.

Scaling an Azure SQL Database

The Logic App to accomplish this is shown in the picture above. We start off with a received HTTP request. After saving your Logic App, this trigger holds the unique url that will trigger this Logic App from, for example, Azure data factory.

Because we want to re-use this Logic App for up- and downscaling, we use a Parse JSON component to extract the message that has been sent to this url.

Finally it fires an HTTP request to the database REST API to scale the database.
Now let's go through these steps one by one.

When a HTTP request is received

As described, we will re-use this Logic App to up- and downscale, so we therefore need to tell this Logic App what ServiceObjective to scale to. In this example we only scale within a single tier (Standard, as hard-coded in the HTTP body below). Once you get the idea, it's easy to add other inputs like tier to this Logic App.

Click on “Use sample payload to generate schema” and enter:

{
  "ServiceObjective": "s3"
}

The JSON schema should look like the image.
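Since the image is not reproduced here: the schema that Logic Apps generates from this sample payload should look roughly like this.

{
    "type": "object",
    "properties": {
        "ServiceObjective": {
            "type": "string"
        }
    }
}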

Parse JSON

Because we want to use this ServiceObjective input in the HTTP request, we parse the incoming body as JSON so that we can reference the ServiceObjective property directly.

Click in the Content textfield, select the Body from “When a HTTP request is received” in the pop-up menu, and paste the same JSON schema from the previous component into the schema field.

HTTP

Choose PUT from the Method dropdown; PUT is needed to perform this change to the database. Paste the url below, replacing the [] parts with your own settings.
https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2017-10-01-preview

Paste the code below into the Body field. For now only the ServiceObjective is variable, but this could be extended with tier, location etc.

{
  "location": "westeurope",
  "sku": {
    "name": "@{body('Parse_JSON')?['ServiceObjective']}",
    "tier": "Standard"
  }
}

Choose Managed identity from Authentication but remember to add this Logic App’s identity to the SQL Server where this database is hosted. More info about that in my previous blog under Authentication. https://jordifrijters.nl/2019/05/31/using-logic-apps-and-azure-data-factory-to-improve-your-etl-processes/

Full HTTP reference for PUT can be found at https://docs.microsoft.com/en-us/rest/api/sql/databases/createorupdate

Running the Logic App

The Logic App is finished now, so it's time to test it. Open or create an Azure data factory, create a pipeline, add a web component to the canvas and paste the url. Select POST as the method, because we want to send something to this Logic App, and specify the body below.

{
"ServiceObjective":"s3"
} 

These settings would be better off stored in variables and parameters, but that might be content for a later blog.
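To give an idea of what this looks like outside the GUI, below is a rough sketch of how the Web activity might appear in the pipeline JSON. The activity name and the placeholder url are made up for illustration; use the HTTP POST URL from your own Logic App.

{
    "name": "Scale database",
    "type": "WebActivity",
    "typeProperties": {
        "url": "<HTTP POST URL of the scaling Logic App>",
        "method": "POST",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": {
            "ServiceObjective": "s3"
        }
    }
}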

Now run this pipeline and check out the database in the Azure Portal. It's scaling, right?

Now that our first recipe is ready, we can create a lot more in a short time. For example, what if we would like to know when the database is back online?

Getting current state of an Azure SQL Database

Create a new Logic App or clone the previous one. Set it up like the image below.

If it is cloned from the previous Logic App, be sure to change all methods to GET (we want to get the status), remove all JSON input and parsing, and add a Response task with status code and body. Also don't forget to add this identity to the SQL Server.

From data factory, create a pipeline with a web task that uses GET as the method and no body, and run it!

Scale and wait until resumed

A great combination could be to scale a database and wait until it is resumed. With both Logic Apps this can easily be accomplished from Azure data factory. The example below uses the first Logic App to scale a SQL Datawarehouse. Yes, that's also just a database; it only has a tiny difference in the url https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2014-04-01-preview

The Until resumed loop holds an expression that retrieves the current state of the SQL Datawarehouse and finishes once it equals Online.

@equals('Online', coalesce(activity('SQL DW State').output.properties.status, 'null'))

The Until loop runs the Logic App that gets the current state and then waits 60 seconds. As long as the state is not Online, this process continues.
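As a rough sketch, the Until loop could look something like this in the pipeline JSON, with the state-checking Web activity and the 60 second Wait inside it. "SQL DW State" is the Web activity name used in the expression above; the Wait activity name and the placeholder url are made up for illustration.

{
    "name": "Until resumed",
    "type": "Until",
    "typeProperties": {
        "expression": {
            "value": "@equals('Online', coalesce(activity('SQL DW State').output.properties.status, 'null'))",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "SQL DW State",
                "type": "WebActivity",
                "typeProperties": {
                    "url": "<HTTP POST URL of the state Logic App>",
                    "method": "GET"
                }
            },
            {
                "name": "Wait 60s",
                "type": "Wait",
                "dependsOn": [
                    {
                        "activity": "SQL DW State",
                        "dependencyConditions": [ "Succeeded" ]
                    }
                ],
                "typeProperties": {
                    "waitTimeInSeconds": 60
                }
            }
        ]
    }
}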

Pause / unpause SQL Datawarehouse

Another great recipe is a Logic App that can pause or unpause your Azure SQL Datawarehouse so that it only runs when needed. This is especially handy for this component because it's not the cheapest one.

Again, it's just like the first Logic App and even easier. Use POST as the method and the url below:

https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]/pause?api-version=2014-04-01-preview

or to resume:
https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]/resume?api-version=2014-04-01-preview

Conclusion

It is super easy and useful to combine Logic Apps and Azure data factory to dynamically adapt your solution to its needs: resume and scale up when data processing needs to happen, and scale down and pause when no data processing is happening.

In one of my next blogs I will write about processing Azure analysis services via Logic Apps.

Using logic apps and Azure data factory to improve your ETL processes

Logic Apps are great when it comes to performing recurring tasks. There are a bunch of connectors available and it looks like your own creativity is the limit of what you can do with logic apps. Of course when it comes to complex, long running tasks it might be better to look at a different solution.

Another great product is Azure data factory, especially after version 2 was released and a visual GUI made it so much easier to build data driven pipelines.

The combination of these two tools is great and I use them a lot together. Logic Apps can be triggered from an HTTP request, and in Azure data factory you have a web task available by default. So to run a Logic App from Azure data factory you simply add its url to a web task and you're good to go.

The example above fires a web request to the url that is automatically created after a Logic App is created from an HTTP trigger.

Depending on what the logic app does you need to deal with authentication. For now we leave it this way and start by creating a simple logic app.

New logic app

Go to Logic Apps in the Azure Portal, click Add and specify a name. In this case I will name it “MyLogicApp”.
After submitting, your Logic App should be in the list; if not, hit refresh. Open your Logic App and on the overview page click “View in Logic Apps designer”.

This is where the magic starts. Scroll down and find the “Blank Logic App” template.

Search for “HTTP” and select the following:

From the “Add new parameter” dropdown select Method and choose GET.
Hit Save and look for the url that is now available in the “HTTP POST URL” field.

The Logic App should look like this now.

If you run this logic app you can see that it succeeds. Of course, there is nothing that can fail 🙂

New Azure data factory pipeline

Go to Data factories in the Azure portal. Create a data factory if you have none; otherwise open the data factory by selecting it and choose “Author & Monitor”. This brings you to the development GUI for Azure data factory.

Under Factory resources hit the “+” and select New Pipeline

From the activities menu drag a Web task (under General) onto the canvas and give it a name. Select the Web task and go to settings. Paste the logic app url into the URL field and select the Method Get.


This method depends on the method we selected while creating the Logic App. There are several different methods, but the most used are GET, POST, PUT and DELETE. Some reference for these methods: https://www.w3schools.com/tags/ref_httpmethods.asp

Run the pipeline by selecting Debug.

This should result in green, succeeded messages. This is the first step in running web requests from Azure data factory.

A practical example

In a modern data platform architecture it might be interesting to do some up- and downscaling before starting a long-running ETL process. In this example we are going to check the current configuration of an Azure SQL database. This might be the first step: after we find out the current service level objective, we can create another Logic App to up- or downscale this database.

Create an Azure SQL database if there is none available. This can be done under SQL Databases in the Azure portal. If there is no SQL Server it should be created as well, which can be done from the database creation wizard.

Navigate back to Logic Apps and create a new Logic App, or go back to the designer interface of the previous Logic App. Start again with an HTTP Request trigger and hit the “+ New step” button. Choose HTTP.

By selecting HTTP, this Logic App is going to perform an HTTP request itself to get the state of a SQL Server database.

Fill in the fields Method, URL and Authentication.


Change the url parts between []
https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2017-10-01-preview

Full reference on the HTTP requests can be found at https://docs.microsoft.com/en-us/rest/api/sql/databases

It's time to save and run this Logic App. After running you will notice that it fails to get the database status. We did specify Managed Identity as our authentication, but we did not give this identity access to our SQL server. Before we go into this part, let me quickly explain two types of authentication I normally use.

Authentication

From the Authentication dropdown a couple of options are available. I am not familiar with all options, but prefer to use Active Directory OAuth or Managed Identity. With OAuth you need to create an app registration from within Active Directory, which can be referred to as a service principal with a username (unique identifier) and a password. Together with the tenant-id this account can be used to authenticate against the Azure components you give it permission to.

The Managed Identity is not fully available on all Azure components at the moment, but is quickly gaining ground. For each Azure service an identity can be created automatically in Azure active directory that can be used between Azure services without putting usernames and passwords in your code. Reference: https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities
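For reference, if you peek at the code view of the HTTP action in the Logic App, the managed identity authentication block looks roughly like the sketch below; the audience value is an assumption based on the fact that we are calling the Azure management API.

"authentication": {
    "type": "ManagedServiceIdentity",
    "audience": "https://management.azure.com/"
}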

In our Logic App we selected the Managed Identity but did not yet give it permission. Time to do so. First we need to set up this identity for this Logic App because for Logic Apps it is not automatically created in Azure AD. So from our logic app navigate to Identity and toggle the status to on.

Hit Save and navigate to your Azure SQL Server. Navigate to Access control, click Add, select “Contributor” as role and now search for your Logic App. Select it and hit save.

Now go back to your logic app and rerun! This can be done directly from Run trigger in the Overview tab. It succeeds!

At this point we have created a Logic App that fires an HTTP request to our SQL Server database to return its current state, authenticated with a managed identity. Back to Azure data factory to tie both together and do something with this result.

Combining Logic App and Azure data factory

Navigate back to your data factory. Create a new pipeline and put a web task on the canvas. Specify the Logic App url, GET as its method and run the pipeline. As you can see it succeeds, but the response is blank.
You can see this response in the output window by selecting the outgoing arrow under Actions. The actual response is:
{ "Response": "" }

This is because we did not specify an output in our Logic App. This is easy to set up. Navigate back to your Logic App and open the designer. Add a new task by searching for Response.

After selecting the status code field a popup appears. Select the status code from there and do the same with the body field.

Go back to your Azure data factory pipeline and re-run the pipeline. Check the output; it should look like below.

{
    "sku": {
        "name": "Basic",
        "tier": "Basic",
        "capacity": 5
    },
    "kind": "v12.0,user",
    "properties": {
        "collation": "SQL_Latin1_General_CP1_CI_AS",
        "maxSizeBytes": 2147483648,
        "status": "Online",
        "databaseId": "",
        "creationDate": "",
        "currentServiceObjectiveName": "Basic",
        "requestedServiceObjectiveName": "Basic",
        "defaultSecondaryLocation": "northeurope",
        "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
        "zoneRedundant": false,
        "earliestRestoreDate": "",
        "readScale": "Disabled",
        "readReplicaCount": 0,
        "currentSku": {
            "name": "Basic",
            "tier": "Basic",
            "capacity": 5
        }
    },
    "location": "westeurope",
    "id": "",
    "name": "",
    "type": "Microsoft.Sql/servers/databases"
}

Now add an If task to the canvas and drag the green line from the web task to the If task. Select the If task, go to settings and click the “Add dynamic content” text. Specify this expression: @equals('Basic', coalesce(activity('Get database state').output.sku.tier, 'null'))
Be aware that “Get database state” is the name of the web component.

It should look like this now.

Now go to Activities and click the button under True. Add a Wait task to the canvas and run this pipeline. In the output window you can see that the Wait task was triggered, so now we could add another web request to scale this database to Standard or Premium.
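For those who prefer the code view, a rough sketch of what this If task might look like in the pipeline JSON is shown below; apart from “Get database state”, the activity names are made up for illustration.

{
    "name": "Check current tier",
    "type": "IfCondition",
    "dependsOn": [
        {
            "activity": "Get database state",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "expression": {
            "value": "@equals('Basic', coalesce(activity('Get database state').output.sku.tier, 'null'))",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "name": "Wait1",
                "type": "Wait",
                "typeProperties": {
                    "waitTimeInSeconds": 1
                }
            }
        ]
    }
}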

Like I said before, with enough creativity great processes can be created with Logic Apps and Azure Data factory.

But why Logic Apps?

Everything we've done so far with Logic Apps can also be accomplished with only Azure data factory. For example, the web request to the database that we built in our Logic App could have been done directly from Azure data factory.

The url is exactly the same as in the Logic App, the method is also GET and MSI is selected as authentication with the default resource url. So why use Logic Apps at all?
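As a sketch of that alternative, the Web activity in Azure data factory could call the management API directly with the data factory's managed identity, roughly like this; the resource url is an assumption based on the management endpoint we are calling.

{
    "name": "Get database state",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://management.azure.com/subscriptions/[]/resourceGroups/[]/providers/Microsoft.Sql/servers/[]/databases/[]?api-version=2017-10-01-preview",
        "method": "GET",
        "authentication": {
            "type": "MSI",
            "resource": "https://management.azure.com/"
        }
    }
}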

If you run processes or tasks that are not initiated from Azure data factory or you have multiple data factories running it is a good idea to put these actions into a Logic App. With this you create a higher level of flexibility and if anything needs to be changed you only have to change the Logic App.

If you only run processes from Azure data factory you could just create a couple of pipelines, each with a specific task, that can be called from another pipeline by using the “Execute Pipeline” task.

One of my next blogs will be full of examples of web requests that will improve your ETL process. Start thinking about:

  • Up- downscale database / datawarehouse
  • Pause / resume datawarehouse
  • Process Azure analysis services
  • Start / Stop integration runtime

Conclusion

There are a lot of possibilities with web requests that can run from within Azure data factory alone or in combination with Logic Apps to get a more dynamic approach to your ETL processes. If you can dynamically scale your whole architecture only when needed, this could deliver great performance improvements and lower your costs when everything is scaled down or paused after execution.