Power BI – Group Workspace and Content Pack – Dashboards, Reports and Datasets

There are three basic types of content or building blocks in Power BI:

Datasets, Reports and Dashboards

Datasets are reporting models with attributes and measures (calculations) that a user can explore and use to build different types of visualizations. Datasets can also be renamed, deleted and refreshed.

Reports are collections of visuals.

Dashboards are a collection of tiles that are pinned from Reports by hovering over the desired chart and clicking the push-pin icon. Tiles can be rearranged and resized.

Where do we store Dashboards, Reports and Datasets?

You will have two options for storing your content:

  • My Workspace
  • Group Workspaces

So how do I organize everything in Power BI?

Any content that will need governance or will potentially reach a large number of users should be stored in a group workspace. Content that does not require much collaboration or governance can be stored in your personal workspace. Another thing to consider is that a personal workspace is limited to 10GB in the Pro version, as is each group workspace. However, every new group gets another 10GB of storage, which makes the group concept even more alluring.

OK, now that I have organized everything, how do I share it?

If you have a dashboard stored in your personal workspace, you can use the Share Dashboard feature.

Only dashboards can be shared (reports and datasets cannot).

The Share Dashboard option is not available for dashboards created in Group workspaces.

Another way to share content is by creating an organizational content pack.

We can create an organizational content pack by clicking the gear icon in the top right-hand corner of the page.

We will then have the option to specify who has access to the content pack, its Title and Description (both are required), and which dashboards, reports and datasets should be included in it.

Then click the Publish button at the bottom of the page to complete the content pack creation process.

After the content pack has been published, users with the required access can consume it by clicking Get Data -> Get.

The Definitive Guide on Collaboration in Power BI Reference

With a live-connection Power BI report published to the Power BI service using the Enterprise gateway, we get an error when connecting to an SSAS cube, but a SQL Server database works

We created a PBI report with a live connection to an SSAS cube, and it worked on the desktop.

We created an organization (enterprise) level gateway in the cloud at app.powerbi.com.

We published the report. When we tried to view it, we got the generic error: “Couldn’t load the model schema”.

We found out that our UPN was different from our user ID (email). Our user ID was “my-user-id@abc.com”, whereas our UPN was “my-user-id@xyz.abc.com”.

So we created a user mapping at the data source in the gateway from “@abc.com” to “@xyz.abc.com”.

Voila! The report started working in the cloud, going live against our on-premises SSAS server through the on-premises gateway.

To create the mapping, navigate to Settings => Manage gateways => Data Source => Users and click the “Map user names” button.

Power BI Community Reference and Microsoft's Guy in a Cube Solution Reference

 

So long AdventureWorks, and hello Wide World Importers!

Any future development on Microsoft sample databases should use Wide World Importers instead of AdventureWorks, since AdventureWorks will not be updated or maintained after 2016.

The Wide World Importers can be downloaded from here.

Can I still use AdventureWorks with SQL Server 2016?

Yes. The final AdventureWorks build will be updated on the official download site here, and you can still use AdventureWorks with your SQL Server 2016 instance.

Microsoft Reference

What is Slowly Changing Dimension

A data warehouse dimensional model comprises facts and measures defined in the context of their dimensions.
Dimensions and their attributes are relatively constant, but they do change over time.

The term Slowly Changing Dimension (SCD) refers to tracking the variation in dimensional attributes over time.
The word “slowly” in this context might seem incorrect.
For example, a salesperson or store dimension might change rapidly if a company reorganizes.
But in general, when compared to a measure in a fact table, changes to dimensional data occur slowly.

Within a data warehouse, you can handle slowly changing dimensions in several ways.

Type 0 – The passive method
In this method no special action is performed upon dimensional changes.
Some dimension data can remain the same as when it was first inserted; other data may be overwritten.

Type 1 (overwrite):
A Type One change overwrites an existing dimensional attribute with new information.
In a customer name-change example, the new name overwrites the old name, and the old value is lost.
A Type One change updates only the attribute, inserts no new records, and affects no keys.
No history is stored; existing data is overwritten by new values.

Before the change:

Customer_ID Customer_Name Customer_Type
1 Cust_1 Corporate

After the change:

Customer_ID Customer_Name Customer_Type
1 Cust_1 Retail
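The Type 1 overwrite can be sketched in Python (a minimal in-memory illustration using plain dicts in place of a real dimension table; column names follow the tables above):

```python
# Minimal sketch of a Type 1 (overwrite) change: the attribute is
# replaced in place and the old value is lost.
def scd_type1_update(dimension, customer_id, new_type):
    """Overwrite Customer_Type for the matching row; no history is kept."""
    for row in dimension:
        if row["Customer_ID"] == customer_id:
            row["Customer_Type"] = new_type  # no new row, no key change

dim = [{"Customer_ID": 1, "Customer_Name": "Cust_1", "Customer_Type": "Corporate"}]
scd_type1_update(dim, 1, "Retail")
print(dim[0]["Customer_Type"])  # Retail
```

Note that the row count never changes; the only trace of the update is the new value itself.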

Type 2 (add a row):
A Type Two change writes a record with the new attribute information and preserves a record of the old dimensional data.
Type Two changes let us preserve historical data.
The history of data changes is preserved.
A new record is inserted each time a change is made.
Every data row has a valid-from date and a valid-to date indicating the period of the data’s validity, and each row usually has an IsCurrent-style flag that is set to Yes for the active record and No for the others.
When a fact table record is inserted, it will be given the appropriate surrogate key of the dimension record.
Since Type Two changes add records, they can increase the database’s size.

Before the change:

Customer_ID Customer_Name Customer_Type Start_Date End_Date Current_Flag
1 Cust_1 Corporate 22-07-2010 31-12-9999 Y

After the change:

Customer_ID Customer_Name Customer_Type Start_Date End_Date Current_Flag
1 Cust_1 Corporate 22-07-2010 17-05-2012 N
2 Cust_1 Retail 18-05-2012 31-12-9999 Y
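The expire-and-insert steps of a Type 2 change can be sketched in Python (in-memory dicts rather than a real warehouse table; treating Customer_ID as the surrogate key and Customer_Name as the natural key is an assumption made for this sketch, mirroring the tables above):

```python
from datetime import date, timedelta

HIGH_DATE = date(9999, 12, 31)  # sentinel "valid to" for the open-ended current row

def scd_type2_update(dim, customer_name, new_type, change_date):
    """Expire the current row, then insert a new row with the next surrogate key."""
    current = next(r for r in dim
                   if r["Customer_Name"] == customer_name and r["Current_Flag"] == "Y")
    current["End_Date"] = change_date - timedelta(days=1)
    current["Current_Flag"] = "N"
    dim.append({
        "Customer_ID": max(r["Customer_ID"] for r in dim) + 1,
        "Customer_Name": customer_name,
        "Customer_Type": new_type,
        "Start_Date": change_date,
        "End_Date": HIGH_DATE,
        "Current_Flag": "Y",
    })

dim = [{"Customer_ID": 1, "Customer_Name": "Cust_1", "Customer_Type": "Corporate",
        "Start_Date": date(2010, 7, 22), "End_Date": HIGH_DATE, "Current_Flag": "Y"}]
scd_type2_update(dim, "Cust_1", "Retail", date(2012, 5, 18))
# dim now matches the "after the change" table: row 1 expired, row 2 current
```

Fact rows inserted after the change would carry surrogate key 2, so historical facts stay joined to the attribute values that were valid when they occurred.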

Type 3 (add a column):
This method traces changes using separate columns (but no new rows).
This means there is a limit to history preservation based on the number of columns in each row that are designated for storing historical data.
For example, a record may have the fields Territory1, Territory1EffectiveDate, Territory2, Territory2EffectiveDate, etc.
A Type Three change is implemented only if we have a limited need to preserve and accurately describe history, such as when someone gets married and you need to retain the previous name.
Instead of creating a new dimensional record to hold the attribute change, a Type Three change places a value for the change in the original dimensional record.
We can create multiple fields to hold distinct values for separate points in time.
In the case of a name change, you could create an OLD_NAME and NEW_NAME field and a NAME_CHANGE_EFF_DATE field to record when the change occurs.
This method preserves the change.
But how would we handle a second name change, or a third, and so on?
The side effects of this method are increased table size and, more importantly, increased complexity of the queries that analyze historical values from these old fields.
After more than a couple of iterations, queries become impossibly complex, and ultimately you’re constrained by the maximum number of attributes allowed on a table.

Before the change:

Customer_ID Customer_Name Current_Type Previous_Type
1 Cust_1 Corporate Corporate

After the change:

Customer_ID Customer_Name Current_Type Previous_Type
1 Cust_1 Retail Corporate

Type 4 – Using a historical table.
In this method a separate historical table is used to track the full history of attribute changes for each dimension; the current table holds only the latest values.

Current table:

Customer_ID Customer_Name Customer_Type
1 Cust_1 Corporate

Historical table:

Customer_ID Customer_Name Customer_Type Start_Date End_Date
1 Cust_1 Retail 01-01-2010 21-07-2010
1 Cust_1 Other 22-07-2010 17-05-2012
1 Cust_1 Corporate 18-05-2012 31-12-9999
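A Type 4 change can be sketched in Python (two in-memory lists standing in for the current and historical tables; matching on Customer_ID and the example dates are assumptions for this sketch):

```python
from datetime import date, timedelta

HIGH_DATE = date(9999, 12, 31)  # sentinel end date for the open history row

def scd_type4_update(current_table, history_table, customer_id, new_type, change_date):
    """Close the open history record, open a new one, and overwrite the current table."""
    cur = next(r for r in current_table if r["Customer_ID"] == customer_id)
    # close out the open-ended history row for this customer
    open_hist = next(h for h in history_table
                     if h["Customer_ID"] == customer_id and h["End_Date"] == HIGH_DATE)
    open_hist["End_Date"] = change_date - timedelta(days=1)
    # open a new history row, then overwrite the current table (Type 1 style)
    history_table.append({"Customer_ID": customer_id,
                          "Customer_Name": cur["Customer_Name"],
                          "Customer_Type": new_type,
                          "Start_Date": change_date, "End_Date": HIGH_DATE})
    cur["Customer_Type"] = new_type

current = [{"Customer_ID": 1, "Customer_Name": "Cust_1", "Customer_Type": "Corporate"}]
history = [{"Customer_ID": 1, "Customer_Name": "Cust_1", "Customer_Type": "Corporate",
            "Start_Date": date(2012, 5, 18), "End_Date": HIGH_DATE}]
scd_type4_update(current, history, 1, "Retail", date(2015, 3, 1))
```

Queries against the current table stay small and fast, while the historical table keeps the full change log.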

Type 6 – Combine the approaches of Types 1, 2 and 3 (1+2+3=6). In this type the dimension table has additional columns such as:

  • current_type – keeps the current value of the attribute. All history records for a given item share the same current value.
  • historical_type – keeps the historical value of the attribute. History records for a given item can have different values.
  • start_date – the start of the attribute value’s effective date range.
  • end_date – the end of the attribute value’s effective date range.
  • current_flag – marks the most recent record.

In this method, to capture an attribute change, we add a new record as in Type 2.
The current_type value is overwritten with the new one, as in Type 1.
We store the history in a historical column, as in Type 3.
The ‘main’ dimension table keeps only the current data, e.g. customer and customer_history tables.

Customer_ID Customer_Name Current_Type Historical_Type Start_Date End_Date Current_Flag
1 Cust_1 Corporate Retail 01-01-2010 21-07-2010 N
2 Cust_1 Corporate Other 22-07-2010 17-05-2012 N
3 Cust_1 Corporate Corporate 18-05-2012 31-12-9999 Y
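The combined Type 6 update can be sketched in Python (in-memory dicts; treating Customer_ID as a surrogate key and Customer_Name as the natural key is an assumption for this sketch, as are the example dates, which follow the table above):

```python
from datetime import date, timedelta

HIGH_DATE = date(9999, 12, 31)  # sentinel end date for the current row

def scd_type6_update(dim, customer_name, new_type, change_date):
    """Type 2 insert + Type 1 overwrite of current_type + Type 3 history column."""
    # Type 2 part: expire the current row and insert a new one
    current = next(r for r in dim
                   if r["Customer_Name"] == customer_name and r["Current_Flag"] == "Y")
    current["End_Date"] = change_date - timedelta(days=1)
    current["Current_Flag"] = "N"
    dim.append({
        "Customer_ID": max(r["Customer_ID"] for r in dim) + 1,
        "Customer_Name": customer_name,
        "Current_Type": new_type,     # current value (Type 1 column)
        "Historical_Type": new_type,  # value valid in this period (Type 3 column)
        "Start_Date": change_date,
        "End_Date": HIGH_DATE,
        "Current_Flag": "Y",
    })
    # Type 1 part: all rows for this customer share the same current value
    for r in dim:
        if r["Customer_Name"] == customer_name:
            r["Current_Type"] = new_type

dim = [{"Customer_ID": 1, "Customer_Name": "Cust_1",
        "Current_Type": "Retail", "Historical_Type": "Retail",
        "Start_Date": date(2010, 1, 1), "End_Date": HIGH_DATE, "Current_Flag": "Y"}]
scd_type6_update(dim, "Cust_1", "Other", date(2010, 7, 22))
scd_type6_update(dim, "Cust_1", "Corporate", date(2012, 5, 18))
# dim now matches the table above: three rows, all with Current_Type "Corporate",
# while Historical_Type preserves the value valid in each period
```

This gives point-in-time analysis via Historical_Type and as-of-today analysis via Current_Type, at the cost of rewriting Current_Type across all of a customer's history rows on every change.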

References:

SQLMag Reference

Datawarehouse4u Reference

James Serra Blog Reference

How to configure SSAS – SQL Server Analysis Services – to generate memory dump files

Open the config file msmdsrv.ini located at “Program Files\Microsoft SQL Server\MSAS11.MULTIDIM\OLAP\Config”.

Make sure CreateAndSendCrashReports is set to at least 1.

To configure SSAS to generate a full dump file that includes the handle information, we can set the SQLDumperFlagsOn setting to 0x34 and the MiniDumpFlagsOn setting to 0x4.

For example, the Exception section in the Msmdsrv.ini file may resemble the following:

<Exception>
<CreateAndSendCrashReports>1</CreateAndSendCrashReports>
<CrashReportsFolder/>
<SQLDumperFlagsOn>0x34</SQLDumperFlagsOn>
<SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
<MiniDumpFlagsOn>0x4</MiniDumpFlagsOn>
<MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
<MinidumpErrorList>0xC1000000, 0xC1000001, 0xC1000016, 0xC11D0005, 0xC102003F</MinidumpErrorList>
<ExceptionHandlingMode>0</ExceptionHandlingMode>
<CriticalErrorHandling>1</CriticalErrorHandling>
</Exception>

Reference from support.microsoft.com