Tenant Analysis
part of Measure Killer for Power BI and Microsoft Fabric
Tenant Analysis is the biggest mode of Measure Killer: it can scan an entire Power BI / Fabric tenant in a short period of time.
​
You will be able to use these results to improve the governance and administration of your tenant. Additionally, you can improve the performance of your capacities and thus lower costs.
​
Our own tenant contains about 1,700 reports, models and dataflows, and a full scan takes only around 20 minutes.
​
What do we mean by an entire tenant?
All Premium / Fabric, Premium-Per-User, Pro and Personal workspaces.
​​
What does scanning mean?
We will get the lineage and metadata of all semantic models, Power BI reports and dataflows (Gen. 1 and 2).
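For context, tenant-wide metadata like this is exposed by the Power BI admin "Scanner" API. The sketch below shows roughly what such a scan looks like at the API level; it is our own illustration with placeholder workspace IDs, not necessarily how Measure Killer implements it, and `token` is assumed to hold an admin-scoped access token.

```python
import time
import requests

BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
token = "..."  # assumed: an access token with tenant admin scope
headers = {"Authorization": f"Bearer {token}"}

# Request lineage, data source details, schema and DAX/M expressions
# for a batch of workspaces (the Scanner API takes up to 100 per call)
body = {"workspaces": ["<workspace-id-1>", "<workspace-id-2>"]}
scan = requests.post(
    f"{BASE}/getInfo?lineage=True&datasourceDetails=True"
    "&datasetSchema=True&datasetExpressions=True",
    headers=headers,
    json=body,
).json()

# Poll until the asynchronous scan succeeds, then fetch the result
while requests.get(f"{BASE}/scanStatus/{scan['id']}",
                   headers=headers).json()["status"] != "Succeeded":
    time.sleep(5)

result = requests.get(f"{BASE}/scanResult/{scan['id']}",
                      headers=headers).json()
print(len(result.get("workspaces", [])), "workspaces scanned")
```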
​
What do I get?
​
1) Initially (after a few seconds) you get this:
-A summary of your tenant (# of workspaces, reports, models, dataflows etc.)
-The content and some metadata of all workspaces (Power BI & Fabric items)
-A list of all data sources in your tenant with their downstream lineage.
-A list of all semantic models and dataflows with their upstream and downstream lineage.
-Permissions to all workspaces and all individual Power BI & Fabric items.
-Permissions per user / group.
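For context, workspace- and item-level permissions like these are available through the Power BI admin REST APIs. Here is a minimal sketch (our illustration, not necessarily Measure Killer's exact calls) that lists workspaces together with their users and access rights:

```python
import requests

token = "..."  # assumed: an access token with tenant admin scope
url = ("https://api.powerbi.com/v1.0/myorg/admin/groups"
       "?$top=100&$expand=users")

groups = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()["value"]
for g in groups:
    for u in g.get("users", []):
        # groupUserAccessRight is e.g. Admin, Member, Contributor, Viewer
        print(g["name"], "->", u.get("identifier"), "-", u.get("groupUserAccessRight"))
```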
​
2) Once you run the metadata scan (which can take minutes to hours), you will get the following:
-A list of all semantic models with unique insights (size, # of calc cols, refresh info, best practices)
-A list of all Power BI reports (custom visual search, # of pages, # of visuals, report-level measures)
-A list of all dataflows (detailed information on refresh, # of queries, contact person)
-All DAX expressions for all semantic models (easy search and export as .json)
-All M expressions of all semantic models and dataflows (easy search and export as .json)
​​
3) Once you run activity logs (separate function):
-All Analyze in Excel activities on Power BI semantic models (user, which model, when etc.)
-All Power BI report views by user in the last 30 days (easy export as .json)
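Under the hood, report views and Analyze in Excel events come from the Power BI activity log, which only retains 30 days of history. A rough sketch of pulling one day of events via the admin REST API (our illustration; the date is a placeholder, and activity names should be verified against your own logs):

```python
import requests

token = "..."  # assumed: an access token with tenant admin scope
headers = {"Authorization": f"Bearer {token}"}

# The activity log API returns at most one day per request, paged
# through continuationUri until the day is exhausted
url = ("https://api.powerbi.com/v1.0/myorg/admin/activityevents"
       "?startDateTime='2024-05-01T00:00:00'"
       "&endDateTime='2024-05-01T23:59:59'")

events = []
while url:
    page = requests.get(url, headers=headers).json()
    events.extend(page.get("activityEventEntities", []))
    url = page.get("continuationUri")

views = [e for e in events if e.get("Activity") == "ViewReport"]
print(len(views), "report views on this day")
```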
These are the requirements for running all features of Tenant Analysis:
-Measure Killer Basic or Enterprise subscription
-You need either the Fabric administrator or the Power Platform administrator role in Azure
-The tenant setting "Enhance admin APIs responses with DAX and mashup expressions" needs to be turned on
-The tenant setting "Enhance admin APIs responses with detailed metadata" needs to be turned on
-XMLA read needs to be enabled
-The tenant setting "Dataset Execute Queries REST API" needs to be turned on
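The last setting matters because models can be queried over this REST endpoint. For reference, a call to the Execute Queries API looks roughly like this (a sketch with a placeholder dataset ID and a trivial test query):

```python
import requests

token = "..."                # assumed: access token with permission on the dataset
dataset_id = "<dataset-id>"  # placeholder

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries",
    headers={"Authorization": f"Bearer {token}"},
    # A trivial DAX query; real use would query actual model objects
    json={"queries": [{"query": 'EVALUATE ROW("check", 1)'}]},
)
rows = resp.json()["results"][0]["tables"][0]["rows"]
print(rows)  # -> [{'[check]': 1}]
```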
​
Let us now look at all the features and the workflow in more detail.
​
When you open Tenant Analysis, you are prompted to authenticate; after that you will see this window.
Especially for larger tenants, we might not want to scan everything, since some of it is not relevant or would take a lot of time. In the screenshot below you can see our pre-selection of which parts of our tenant we want to scan.
​
​
We can scan individual capacities in various regions and include or exclude Pro and Personal workspaces.
If we want to scan everything we can just "Select all".
Since our tenant is very small (in terms of number of workspaces) I also included Personal workspaces.
​
Once I hit Next, the main window of Tenant Analysis will pop up and show me the parts listed in section 1 above.
​
Now we can already see our Tenant Summary at the bottom left, where we have the number of Premium, Pro and Personal workspaces, data sources and so on.
We also show the number of Fabric items tenant-wide.
When I select the Workspaces tab on top, I will see the following:
Obviously this is still very basic, but it can help you understand which artifacts reside in each workspace.
​
Let us go to the Lineage tab next. Now it is getting more interesting: I will get the full lineage of every dataflow and semantic model in my tenant.
​
In the example below I searched for "weat" to find the weather dataset I am interested in. At the first level of the hierarchy you can see which dataflows this semantic model is using, the downstream semantic models (composite models), as well as the paginated and Power BI reports connected to it.
When you open the next level you can see which data sources the dataflow is using and so on.
​
Remember that everything in Tenant Analysis can be exported (as .json, or saved as a .measurekiller file for someone else to import back into the tool - e.g. someone without admin privileges).
Next we will check out the lineage of our data sources.
This can be particularly interesting for seeing the impact of changing or removing databases, SharePoint folders and so on. I am curious about a web API and will take a deeper look into its lineage across our tenant.
Above we can see all models, reports and dataflows that are somehow connected to this weather API.
If we are interested in the lineage of a database and want to see not only the server but also the views/tables, we need to take a look at the M expressions; this will come later in this post (scroll down).
In the screenshot above we can see the Access tab. We can choose to look at access either by user/group, by workspace, or by artifact (models, reports etc.).
I have chosen the user or group level and selected pedro@... We can see the number of workspaces, models etc. he has access to. If I expand each item, I will see exactly which items those are.
​
Remember that everything in Tenant Analysis can be exported (as .json).
Next we will pull the metadata of all models and reports; this will also activate the tabs of the tool that are still greyed out.
For this we need to go back to the Selection window and decide which workspaces (or maybe all) we want to run this advanced scan on. I am choosing all.
​
It will take approximately 10 minutes for every 1,000 reports and models you have in your tenant.
The speed depends on your internet connection, on the share of Pro workspaces (a large share will slow the scan down drastically) and on the size of your capacity. How fast your computer is also plays a role.
In the first window (pre-filters) we saw an option for multi-threading; this will drastically speed up the analysis. Make sure to set it at or below the suggested value. The higher this value is, the fewer resources your computer will have for other tasks.
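To illustrate why multi-threading helps: the scan is dominated by many independent, network-bound REST calls, which can be overlapped. Here is a generic sketch (not Measure Killer's actual code) with a hypothetical `fetch_metadata` function and a bounded worker count:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_metadata(item_id: str) -> dict:
    # hypothetical: one REST call per report/model; the thread mostly
    # waits on the network, so several calls can run concurrently
    return {"id": item_id}

item_ids = ["report-1", "report-2", "model-1"]  # placeholder IDs

# Keep max_workers at or below the suggested value: beyond that, extra
# threads mainly add CPU/memory pressure instead of speed
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch_metadata, item_ids))

print(len(results), "items scanned")
```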
​
Remember that we only analyze and interact with metadata, so models with billions of rows are not a problem. Reports with hundreds of pages and lots of custom visuals, however, will take longer to run.
​
​
Generally it is a good idea to start running the advanced scan right when we open Tenant Analysis, since it will take time. We can explore the various tabs (Workspaces, Lineage, Access etc.) while the main scan is running in the background.
​
After you click on Run you might see this window pop up: in order for the tool to get all the metadata it needs, it requires access to those workspaces. Using your admin privileges, it will grant you temporary access to these workspaces (as Contributor) and offer to remove that access again once the scan has finished.
When the advanced scan (to get all metadata of all models, dataflows and reports) has finished, it automatically opens the Semantic models tab.
As you can see below (please enlarge the image), you get a list of all semantic models that have been analyzed.
-You will get the name and location (workspace) of that model
-The size of the model
-# of calculated columns
-# of reports connected
-When the last refresh happened and whether it was successful
-Contact person (last modified by)
-Average refresh duration
-Refresh schedule
-Storage mode
-Best practices score (ranging from Power BI Pro -> Power BI Criminal)
You can click on the expand icon of any model to see more details, like:
-Tables and columns
-Relationships
-Measures
-Connected reports (both Power BI and Paginated reports)
-Local date tables (hopefully none, but if they exist in the model they will show up as in the screenshot below)
You can search for local date tables tenant-wide if you click on "Search all levels" (like in the screenshot above) and then enter "local" or "local date" in the search field. Additionally, you can see the unnecessary storage those "auto date/time" fields use up, which is often an issue.
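If you prefer scripting this check against an exported file, a small sketch is below. The field names in the export are our assumption (the real export schema may differ); the table name prefixes, however, are the ones Power BI reliably uses for auto date/time tables.

```python
import json

# assumed: a Measure Killer full export; "semanticModels"/"tables"/"name"
# are hypothetical field names - adjust to the real export schema
with open("tenant_export.json", encoding="utf-8") as f:
    export = json.load(f)

for model in export.get("semanticModels", []):
    auto_date_tables = [
        t["name"] for t in model.get("tables", [])
        if t["name"].startswith(("LocalDateTable_", "DateTableTemplate_"))
    ]
    if auto_date_tables:
        print(f"{model['name']}: {len(auto_date_tables)} auto date/time tables")
```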
​
One-to-one and many-to-many relationships are also highlighted in yellow by default.
Next there is a separate tab for all dataflows (Gen. 1 and Gen. 2), showing us mostly information on refresh times.
-You will get the name, type and location (workspace) of that dataflow
-The date and time of the last refresh and whether it was successful
-# of queries / tables
-The contact person or owner (last modified by)
-Average refresh duration
-Refresh schedule
​
Additionally, you can expand the queries (first column, see screenshot above) and see which entities/queries exist in the flow, along with their columns. If you want to see the M code, please jump to that section.
The next tab of Measure Killer Tenant Analysis features all Power BI reports we analyzed (by pulling their metadata).
In the screenshot above (you can zoom in) you can see some interesting information:
-# of visuals in a report
-# of custom visuals
-# of report-level measures
-# of pages
-Workspace name (in which the report resides)
-Workspace type (Premium, Pro etc.)
-Name of the dataset the report is connected to
-How many views the report got per day (this is a separate function you need to run)
-Total views
-Contact person (last modified by)
If you expand a report you can go down through the hierarchy to the pages and visuals. You can nicely see the distribution of visuals per page like this.
For each report we can drill down to see the pages, standard visuals and custom visuals.
This gives you the incredible ability to search for custom visuals across your entire Power BI tenant.
​
A custom visual has a blue stamp if it is certified. The "Is used" column shows whether a visual is actually used in the report, meaning someone has actually built this visual on a page. "False" means the visual was imported into the report but nothing has been built with it.
​
Like all information in Tenant Analysis, this can also be exported as a .json file.
Next are all DAX expressions in our tenant. These can number in the millions (or who knows, maybe even billions).
We will get them from all semantic models and Power BI reports (report-level measures) we ran the Advanced scan on.
The list includes measures, report-level measures, calculated columns and calculated tables.
The search feature can help you find specific functions or variables you are using in your DAX tenant-wide. As you can see in the screenshot above, we expose the following columns:
-Artifact name / DAX expression (when expanded - see screenshot below)
-Table name (it belongs to)
-Type (Measure, calculated table, calculated column, report-level measure, local date table (hopefully not))
-Dataset or report (for report-level measures) the DAX expression belongs to
-Source type (either semantic model or report)
-Workspace (name)
-Workspace type (Premium or Pro)
​
Like everything in Tenant Analysis, all DAX expressions can also be nicely exported as a .json file.
On the top right you can also see a function for finding Duplicate DAX expressions.
Initially, as we can see in the screenshot below, I get duplicates within one semantic model.
​
How does this work?
We parse out any comments, whitespace etc. and then check if the expressions are identical.
This function will get more sophisticated over time, e.g. to recognize that
CALCULATE ( SUM ( column1 ) ) and SUM ( column1 )
are actually duplicates - which it does not recognize currently.
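As a rough sketch of that idea (our simplified illustration, not Measure Killer's actual parser): strip comments, remove whitespace and normalize case, then group expressions that end up identical. Note that this crude version ignores string literals, which the real tool would have to treat more carefully.

```python
import re
from collections import defaultdict

def normalize_dax(expr: str) -> str:
    """Crude normalization: drop comments, whitespace and casing."""
    expr = re.sub(r"//[^\n]*|--[^\n]*", "", expr)       # line comments
    expr = re.sub(r"/\*.*?\*/", "", expr, flags=re.S)   # block comments
    return re.sub(r"\s+", "", expr).lower()

def find_duplicates(measures: dict[str, str]) -> list[list[str]]:
    """Group measure names whose normalized expressions match."""
    groups = defaultdict(list)
    for name, expr in measures.items():
        groups[normalize_dax(expr)].append(name)
    return [names for names in groups.values() if len(names) > 1]

print(find_duplicates({
    "Profit":    "[AC]-[BU]",
    "Profit v2": " [AC] - [BU]  // same thing",
    "Sales":     "SUM ( Sales[Amount] )",
}))
# -> [['Profit', 'Profit v2']]
```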
​
If we click on "Find duplicates tenant-wide", it gets more interesting, but you really need to be careful here: this can take a long time for larger tenants with hundreds or thousands of models. For 100 models it took my laptop 2 seconds to run the function, and for 1,000 models about one minute. Still, this needs to be used with extreme caution, since it could freeze the application or potentially even crash it. Make sure to save your file as .measurekiller beforehand to avoid any trouble.
The output is quite amazing though: I can see that multiple developers across a couple of models have been building measures with this expression: [AC]-[BU].
​
Obviously I can also do my own analysis since I can export all DAX expressions as a .json file as well.
The next tab shows us all M expressions in our tenant. This can obviously be a lot of text, since it takes the M code of all queries (loaded and unloaded) from all semantic models and dataflows (Gen. 1 and Gen. 2).
​
This opens up a huge number of possibilities, like searching for a database schema, table or view. I can see exactly which models and flows are impacted and in which queries this table/view has been used.
​
In the screenshot below I was searching for the DateTime.LocalNow() function, which can have major impacts across time zones (the Power BI Service runs in UTC).
Obviously I can export all M expressions as a nice .json file, like everything in Tenant Analysis.
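And if you would rather script this kind of search over the export, here is a tiny sketch (the field names are our assumption about the export schema, not documented ones):

```python
import json

# assumed: the "M expressions" export; "expression", "artifact" and
# "workspace" are hypothetical field names
with open("m_expressions.json", encoding="utf-8") as f:
    queries = json.load(f)

hits = [q for q in queries if "DateTime.LocalNow" in q.get("expression", "")]
for q in hits:
    print(q.get("workspace"), "/", q.get("artifact"))
```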
​
Generally we recommend exploring lineage in the Lineage tab. But this only gets you so far, since it does not expose any tables or views, only the server and database names.
​
For more granular impact analysis (if we want to change a table in our data warehouse or rename something), this M expression search is pure gold.
In Tenant Analysis and the Tenant Admin Mode (mode 4) of Measure Killer we have a function to pull Power BI activity logs.
This provides information on report consumption (views) as well as users doing Analyze in Excel on top of our semantic models.
As you can see in the screenshot above we get a list of reports (when expanded) with their number of views (see reports section for more details).
​
We can only fetch the last 30 days, but if you do this regularly, Measure Killer will build up a database (stored on your machine) to keep all the history. In the tab on the right side you can see for which months and days you have those logs. The database, called "MK", is located in the %APPDATA% -> "Measure Killer" folder.
​
We can also find out who is creating or refreshing an Excel file with a live connection to our Power BI semantic models. You have the option to copy the email addresses of the Excel users to get in touch with them.
This is especially interesting in modes 3 and 4 of Measure Killer, where we optimize a semantic model and remove all unused columns and measures. Since we do not want to take anything out that someone might need in an Excel file on a local machine, it is important to get in touch with them.
I have mentioned this a few times in the various sections of this page, but everything that you see in the UI of Tenant Analysis can be exported (the export actually contains much more than what is visible in the user interface of Measure Killer Tenant Analysis).
​
We have the export section that features the following exports as .json files:
Full export is everything from all tabs; this can be gigabytes of metadata for larger tenants.
​
Full export without DAX and M code still produces large results, since you will have all columns of all models, report pages, data sources and so on.
​
Custom visuals provides a list of all custom visuals in the reports you analyzed, with report name, workspace, name of the custom visual, whether it is certified or not, etc.
​
​DAX expressions are all measures, calculated columns and calculated tables from all semantic models and report-level measures.​
​
M expressions are all loaded and unloaded queries from all semantic models and dataflows (Gen. 1 and Gen. 2).
​
Refresh events shows us detailed information on refreshes for all semantic models and dataflows (Gen. 1 and Gen. 2).
​
Analyze in Excel gives us a list of all models and respective users having a live connection to a Power BI semantic model from their local Excel files.
​
Report views gives us detailed information about which user viewed which report and when, for further analysis.
​
User access rights shows us who has access to what: which users or groups have workspace-level or item-level (models, reports, Fabric items etc.) permissions.
It also shows what type of permissions those are.
​
The export to .json is one thing, but what if I want to share the results of Tenant Analysis with a colleague, for example because I do not have the time to go through everything myself?
​
For this we built the save and load feature that is available in all modes of Measure Killer.
The tool will generate a .measurekiller file that someone else can import back into the tool.
This person does not need to run anything, and no permissions are checked; the whole UI is imported, so the experience is the same as for the person who actually ran the scan.
​
For Tenant Analysis the file can get very big (multiple GBs in some cases).
When you first launch Measure Killer, there is a button called "Load a file" at the bottom left; this allows you to import a .measurekiller file.
​
It will automatically jump to the mode the other person has saved.