2 more Inventory Power BI Dashboards to Supercharge Your Dynamics GP Reporting

As I continue to build out my Dynamics GP Power BI dashboards, this post looks at product performance and inventory reorder analysis.

Product Performance

In the Product Performance dashboard, I’ve added ABC segmentation, moving averages and time comparisons using DAX measures. You can find additional information on the techniques I’m using within my dashboards and reports at www.daxpatterns.com.
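For context, the ABC rule itself is straightforward: rank items by sales, flag the items that make up roughly the first 70% of cumulative sales as class A, the next 20% as class B, and the remainder as class C. The dashboard does this with DAX measures, but as a rough illustration of the logic only, here is a hypothetical T-SQL sketch; the dbo.SalesLine table, its columns and the 70%/90% cut-offs are assumptions, not the report’s actual definitions.

-- Sketch: classify items A/B/C by their share of cumulative sales (assumed table and columns).
WITH ItemSales AS (
    SELECT ItemNumber, SUM(ExtendedPrice) AS Sales
    FROM dbo.SalesLine
    GROUP BY ItemNumber
),
Ranked AS (
    SELECT ItemNumber, Sales,
           SUM(Sales) OVER (ORDER BY Sales DESC ROWS UNBOUNDED PRECEDING) * 1.0
               / SUM(Sales) OVER () AS CumulativePct
    FROM ItemSales
)
SELECT ItemNumber, Sales,
       CASE WHEN CumulativePct <= 0.70 THEN 'A'
            WHEN CumulativePct <= 0.90 THEN 'B'
            ELSE 'C' END AS ABCClass
FROM Ranked;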

In the Product Performance report below, I used a scatter chart to analyze each product class’s profit margin and total profit. The Sales ABC by ABC Class stacked bar chart segments the products using ABC segmentation and lets you drill down into each segment to see the underlying detail. Additionally, there are total cost by warehouse location and weekly moving average cost comparison charts at the bottom of the report.
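The weekly moving average in that bottom chart is the standard moving-average pattern applied to cost. The report calculates it with a DAX measure; purely as a sketch of the calculation, here is a hypothetical T-SQL equivalent in which the dbo.InventoryCost table, its columns and the four-week window are assumptions.

-- Sketch: total cost per week with a trailing four-week moving average.
WITH WeeklyCost AS (
    SELECT DATEPART(YEAR, TrxDate)     AS TrxYear,
           DATEPART(ISO_WEEK, TrxDate) AS WeekNum,
           SUM(ExtendedCost)           AS TotalCost
    FROM dbo.InventoryCost
    GROUP BY DATEPART(YEAR, TrxDate), DATEPART(ISO_WEEK, TrxDate)
)
SELECT TrxYear, WeekNum, TotalCost,
       AVG(TotalCost) OVER (ORDER BY TrxYear, WeekNum
                            ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS MovingAvg4Week
FROM WeeklyCost
ORDER BY TrxYear, WeekNum;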

Power BI GP Product Performance

Inventory Reorder Analysis

For my Inventory Reorder Analysis report, I’m using some of the techniques discussed by Belinda Allen in her YouTube video on the Inventory Item Reorder Dashboard. I converted it to Power BI and added a little extra by also analyzing SOP product sales data and computing an “Inventory Logic” flag based on whether the “Inventory Sold in Time Period” column total is three times the current “Qty Available”.
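In other words, an item gets flagged for reorder when the quantity sold over the chosen period reaches three times what is currently available. The report implements this as a DAX measure; as a rough sketch of the rule only, here is a hypothetical T-SQL version in which the dbo.ItemQuantity and dbo.SalesHistory tables, their columns and the 12-month window are assumptions.

-- Sketch: flag items where quantity sold in the period is at least three times the quantity available.
DECLARE @PeriodStart date = DATEADD(MONTH, -12, CAST(GETDATE() AS date));

SELECT q.ItemNumber,
       q.QtyAvailable,
       ISNULL(SUM(s.Quantity), 0) AS QtySoldInPeriod,
       CASE WHEN ISNULL(SUM(s.Quantity), 0) >= 3 * q.QtyAvailable
            THEN 'Reorder' ELSE 'OK' END AS InventoryLogic
FROM dbo.ItemQuantity AS q                 -- assumed on-hand/available quantities
LEFT JOIN dbo.SalesHistory AS s            -- assumed SOP sales line detail
       ON s.ItemNumber = q.ItemNumber
      AND s.DocDate >= @PeriodStart
GROUP BY q.ItemNumber, q.QtyAvailable;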

Power BI Inventory Reorder

Updated Template

You can download the updated template here. In the template, I also updated the Finance Summary report to use the new Matrix visual that was released in March, which adds drill-down functionality to the Profit and Loss matrix.

Future Dashboard developments

In the coming months, I plan to build additional Power BI reports for my Dynamics GP data. These reports will include:

  • Vendor Analysis
  • Field Service – Contract and Service Call Analysis

Stay tuned for future developments of these reports and an overview of their functionality.


Power BI Visuals in On-Premises SQL Server Reporting Services

Yesterday, January 17, 2017, Microsoft released the Power BI reports in SQL Server Reporting Services (SSRS) technical preview. You can read all about it here. I downloaded the technical preview, installed it on my system, and deployed a Power BI visual to my SSRS web site. Here are the steps to get it installed on your system.

· Download the SSRS Technical Preview and Power BI Designer for the preview

Power BI SSRS

· The Technical Preview has the following system requirements:

    • Your own VM or server (not one in production use)
    • Windows Server 2012 or later (or Windows 8 or later)
    • .NET Framework 4.5.2 or later
    • SQL Server Database Engine (2008 or later), to store the report server database
    • SQL Server Analysis Services (2012 SP1 CU4 or later), to store your data models

· I first installed SQLServerReportingServices.exe on my system, accepted the license terms and agreement, and then selected the “Install” button.

Power BI SSRS

· After the installation completed, I configured the technical preview of SSRS by selecting the “Configure Report Server” button.

Power BI SSRS

· I provided my SQL Server name to connect to the Reporting Services Configuration Manager.

Power BI SSRS

**NOTE – While the configuration manager let me configure the SSRS technical preview web site with my original VM’s computer name of ‘BARRYCROWEL38F0’, I could not publish my Power BI report with that computer name. I shortened the name to 7 characters and removed the numbers from the name to get the Power BI Designer Technical Preview to function with the SSRS web site.

· Design a Power BI visual with the Power BI Designer Technical Preview. In my visual, I connected to a SQL Server Analysis Services (SSAS) database. To publish your report to SSRS, select the ‘Save As’ menu option.

Power BI Technical Preview SSRS

· Follow the screen prompts and supply the report server.

Power BI Technical Preview SSRS

· Then enter the report name.

Power BI technical preview SSRS

The final product is a Power BI visual with slicer functionality in an on-premises SSRS web site.

**NOTE – the current SSRS technical preview does not work with custom visuals or R visuals.

Power BI On Premise

Conclusion

I’m loving this first on-premises technical preview and can’t wait to see what’s next.

Dynamic Security – a look at the final product

Introduction

Recently I posted a blog titled “How to implement Analysis Services Dynamic Security based on Microsoft CRM user access rights”. That post talked about how to implement the back-end structure needed to put it in place but didn’t show the result of all that hard work.

The dynamic row-level security (RLS) that we set up inside Analysis Services will work within reports and dashboards built in Reporting Services, Power BI and Excel. Power BI has other ways of deploying RLS when you are not using an Analysis Services connection. You can learn more about that here:

· http://sqlkover.com/dynamic-security-in-power-bi/

· http://radacad.com/row-level-security-configuration-in-power-bi-desktop

The End Result

We built our dashboard in Power BI Designer and published it to our end users. One of the end users, “Tom”, has access to 558 CRM customer accounts within the US. When he accesses the dashboard, he can view only the results that he has privileges to see. (See Figure 1 below.)

Power BI dashboard image 1

Figure 1 – Tom’s access view

If “Tom” tries to access an area that he doesn’t have privileges to, say Canada, the dashboard allows him to select the filter but changes the chart results to show no data. (See Figure 2 below.)

Power BI dashboard image 2

Figure 2 – Tom’s view with Canada selected

Additionally, the chart(s) change based on the end user that’s accessing the dashboard. In Figure 3 below, I show an administrative user accessing the dashboard and the result shown to them when selecting the “Canada” filter.

Power BI Dashboard image 3

Figure 3 – Administrative user view of Canadian Sales

Conclusion

I hope this helps you with your future Analysis Services development and Dashboard projects.

How to implement Analysis Services Dynamic Security based on Microsoft CRM user access rights

Introduction

In Analysis Services implementations, there is always a demand for implementing security. In this post, I’ll describe a dynamic, or attribute-based, security solution in Analysis Services that I built for a customer based on Microsoft CRM salesperson customer access rights. The data-level, or row-level, access is driven by the CRM security access rights: a SQL script and an SSIS package deploy each user’s Active Directory account and customer number to a custom CRM table. By leveraging the CRM security already established, we reduce the administrative tasks needed to make additional security changes in Analysis Services whenever there is a CRM security change.

The Setup

Here are the pieces that we need to deploy to put the security in place:

1. Microsoft CRM table script

2. DataMart Acct_Access table script

3. DataMart User table script

4. Add Dynamic Security role to cube and define MDX to filter Active Directory user access

5. Create SSIS package to populate and update Acct_Access and User tables in DataMart

1. Microsoft CRM table script

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Acct_Access](
    [AccountNumber] [varchar](200) NULL,
    [AccountID] [varchar](200) NULL,
    [DomainUser] [varchar](200) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO

2. DataMart Acct_Access Table Script

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Acct_Access](
    [Access_Key] [bigint] IDENTITY(1,1) NOT NULL,
    [AccountNumber] [varchar](200) NULL,
    [AccountID] [varchar](200) NULL,
    [DomainUser] [varchar](200) NULL,
    [Customer_Key] [bigint] NULL,
    [User_Key] [bigint] NULL,
    [Company_Key] [bigint] NULL,
    CONSTRAINT [PK_Access] PRIMARY KEY CLUSTERED
    (
        [Access_Key] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]

3. DataMart User Table Script

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[User](
    [User_Key] [bigint] IDENTITY(1,1) NOT NULL,
    [DomainUser] [varchar](200) NULL,
    CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED
    (
        [User_Key] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO


4. Add Dynamic Security role to Cube and define MDX script to filter Active Directory user access

Once this DataMart and CRM table structure is in place, the next step is to bring it into your Analysis Services solution. Here are the steps to follow inside BIDS/SSDT to leverage these tables:

· Add the new tables (User and Acct_Access) to your Data Source View.

· Create a new dimension based on the User table. Hide all the columns by changing AttributeHierarchyVisible to False; you don’t want users to be able to browse this dimension. Then process the dimension.

· Create a new Measure Group in the cube based on the Acct_Access table and name it Security. Delete all the measures it creates except for the automatically generated Count measure. Hide the single remaining measure by selecting it and changing the Visible property to False. This hides the entire Measure Group from your users, but you can still use it when writing MDX.

· Ensure these two have a Regular relationship set up on the Dimension Usage tab of the cube designer, as seen in the screenshot below.


· In Solution Explorer, right-click Roles and select New Role. Name the role Dynamic Security and grant Read access on the General, Data Sources and Cubes tabs. On the Membership tab, add your CRM groups or user accounts to this role.

· On the Dimension Data tab, select the Customer dimension from the dropdown list. Then select the Customer Number attribute from the dropdown list.


· On the Dimension Data tab, select the Advanced tab. In the Allowed member set area, enter this MDX:

EXISTS(
    {[Customer].[Customer Number].Members},
    STRTOSET("[Customer].[Domain User].&[" + Username() + "]"),
    "Security"
)


5. Create SSIS package to populate and update Acct_Access and User tables in DataMart

With the table structure and the Cube solution complete, we need to create the SSIS package to populate and update the Acct_Access table in our CRM database and then bring the updated data into the DataMart. What we need:

a. First, we create an Execute SQL task and add the script below to delete and repopulate the table with the current CRM customer access data. Change the login (‘Contoso\bcrowell’) and domain group names in the script to match your environment.

-- Run as a login with rights in the CRM database (change for your environment).
execute as login = 'Contoso\bcrowell'
delete Acct_Access
GO

declare @DomainName varchar(200)

-- Make sure you execute the step to create the domain user table with the correct permissions.
execute as login = 'Contoso\bcrowell'
if exists (select * from sysobjects where name = 'DomainUser')
    drop table DomainUser
select DomainName into DomainUser from SystemUser
grant select on DomainUser to [Contoso\Domain Users]

-- Loop through every enabled CRM user, impersonate them, and capture the accounts
-- they can see through the FilteredAccount view.
declare SEL_CUR cursor for select DomainName from SystemUser where IsDisabled = 0
open SEL_CUR
fetch next from SEL_CUR into @DomainName
while @@FETCH_STATUS = 0
BEGIN
    SELECT SUSER_NAME(), USER_NAME();
    execute as login = @DomainName
    insert into Acct_Access(AccountNumber, AccountID, DomainUser)
    select AccountNumber, AccountID, @DomainName from FilteredAccount
    fetch next from SEL_CUR into @DomainName
    REVERT
END
close SEL_CUR
deallocate SEL_CUR
drop table DomainUser
GO

b. Add an additional Data Flow task to take the CRM Acct_Access data and populate/update the DataMart’s Acct_Access and User tables.
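If it helps to see that load expressed set-based rather than as Data Flow components, here is a rough T-SQL sketch of what the step does. It assumes the CRM Acct_Access rows have been staged into the DataMart as dbo.Stage_Acct_Access and that a Dim_Customer table carries the Customer_Key; both names are assumptions for illustration.

-- Sketch: add any new users, then rebuild the access bridge table with surrogate keys.
INSERT INTO dbo.[User] (DomainUser)
SELECT DISTINCT s.DomainUser
FROM dbo.Stage_Acct_Access AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.[User] AS u WHERE u.DomainUser = s.DomainUser);

TRUNCATE TABLE dbo.Acct_Access;

INSERT INTO dbo.Acct_Access (AccountNumber, AccountID, DomainUser, Customer_Key, User_Key)
SELECT s.AccountNumber,
       s.AccountID,
       s.DomainUser,
       c.Customer_Key,                 -- assumed customer dimension lookup
       u.User_Key
FROM dbo.Stage_Acct_Access AS s
JOIN dbo.[User] AS u
  ON u.DomainUser = s.DomainUser
LEFT JOIN dbo.Dim_Customer AS c
  ON c.AccountNumber = s.AccountNumber;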

Conclusion

I’m not the first to deploy some of the techniques used above. I started my learning by reviewing the two blog posts below and modified the approach to work with the security rights already set up in Microsoft CRM.

· http://bidn.com/Blogs/analysis-services-dynamic-security

· http://bifuture.blogspot.com/2011/09/ssas-setup-dynamic-security-in-analysis.html

I hope this helps you with your future Analysis Services and Dashboard development projects.

How to Analyze and Report on your Data

Part 2 of 2 in Data Warehousing series

Every business, from startups to large established companies, needs to track details about its customers, products, sales, purchases, social media and website logs, to name a few. By extracting, manipulating and analyzing this data, you can determine key metrics that help you understand more about your customers and grow your business. In part 1 we talked about how to kick-start your data warehousing with BI360; now we need to know what to do with all this data.

Basically, you can use your data warehouse for financial statement reporting and analysis, dashboards and data mining. I previously went through how to use BI360’s One Stop Reporting here, so I will be talking more about data mining.

Data mining can be defined as the process of analyzing large quantities of data through automatic or semi-automatic tasks to extract previously unknown, interesting patterns. Over the years, storage has grown at a faster rate than computing power. As a result, companies have stored enormous amounts of data and have become data-rich but knowledge-poor. The main purpose of data mining is to increase the value of these large datasets by extracting knowledge from them and making the data useful to the business.[1]

In general, there are two types of data mining – predictive and descriptive. Predictive data mining will use variables or fields from a dataset to predict unknown or future values. The goal is to produce a model from a defined dataset that is used to perform classification, estimation, and other data mining tasks. On the other hand, descriptive data mining will focus on finding patterns that describe the data and can be interpreted by data analysts or business users. The goal is to gain understanding of the data by uncovering new relationships, patterns, and important information regarding the dataset.

Data mining can be valuable in virtually any industry and business scenario. The following are just a few examples of how data mining can improve decision making and give businesses a better return on investment with their Business Intelligence (BI) projects:

  • Recommendation Generation and Retail Floor Optimization:

What products/services should a company offer to its customers? Heat mapping customer traffic patterns can help place offerings; casinos use this to help optimize game theme selection, denomination selection, placement and removal.

E-commerce websites – analyze the purchasing behaviors of the customer population and recommend products based on items in a customer’s cart.

  • Anomaly Detection:

Analyze items that don’t fit a certain pattern. Credit card companies, insurance companies and businesses with POS systems use this method to detect fraud.

  • Churn Analysis:

Which customers are most likely to switch to a competitor? When will they leave and can it be acted on before they do? Based on findings, companies can improve relationships and offer those customers discounts or loyalty incentives.

  • Market Penetration:

How many customers or sales are there per geographic location?

  • Risk Management:

Determine the risk of a loan or mortgage based on the customer profile and the amount they are asking for.

Companies can make decisions on the cost and risk of the loan by using historical customer data.

  • Customer Segmentation:

How well do companies know their customers? What is their Life Time Value (LTV)?

Determine behavioral and descriptive profiles for customers in order to target marketing campaigns. The behavioral and descriptive profiles of high-LTV customers can then be applied against first-time customers’ profiles to capture their loyalty.

  • Forecasting:

Estimate sales and/or inventory for each week, month, and quarter of a specific year.

Data Mining Tasks

Determining the correct task and algorithm to apply to your dataset is a crucial step in achieving an accurate and useful data mining model. In some cases, it will be quite obvious which task will be the most accurate to use depending on the nature of your data. More often than not, you will need to explore and combine multiple tasks before arriving at a single solution. The following section describes the seven basic data mining tasks, with some additional resources and discussion on each:

Classification: Can be used to identify loan applicants as low, medium, or high credit risks based on attributes such as income, credit score, employment history, home ownership, and amount of debt. In this case the target would be the credit risk class, and the other attributes would be the predictors.

Clustering: By using attributes such as Income and Age, three clusters could be created (could be more than 3 depending on data) – 1. Younger population with low income, 2. Middle Age with higher income, and 3. Older Customers with lower income.

Association: (also called Market Basket Analysis) Perhaps most famously, Amazon uses these types of analyses to suggest additional items you may be interested in purchasing, based upon other items frequently bought together.

http://www.casinoenterprisemanagement.com/articles/may-2009/turning-market-basket-analysis-action

http://blogs.office.com/2014/10/08/6-cool-ways-use-custom-maps/

Sequence Analysis: Can be used to analyze the sequence of web clicks on a website. The results are probabilities of the next click in the sequence, i.e. if a user clicks ‘News’, there is a 20% chance the next click will be ‘Sports’ and a 30% chance it will be ‘Weather’.

Sentiment Analysis

http://blogs.msdn.com/b/microsoft_business_intelligence1/archive/2012/09/19/the-social-media-summer-games-sentiment-tracking-in-power-view-it-s-a-slam-dunk.aspx

https://cwebbbi.wordpress.com/2014/10/19/sentiment-analysis-in-excel-with-azure-machine-learning-and-power-query/

Social Media Analysis

http://www.digital-warriors.com/social-media-analytics-power-business/

Deviation Analysis: Used to find the rare cases whose behavior does not match the ‘norm’; most commonly used in fraud detection – finds the transactions that don’t match the spending habits of the customer.

http://www.journalofaccountancy.com/Issues/2003/Aug/TurnExcelIntoAFinancialSleuth

Regression: Can be used to predict the value of a house based on location, number of rooms, land size, and crime rates.

Forecasting: Based on the sales from last year by month, how many cases of soda will the northeast store sell in January? The output would be the estimated number of cases of soda for January.

Next steps

For a quick how-to on using SSAS Data Mining, check out my previous post here.


[1] http://en.wikipedia.org/wiki/Data_mining

Using SSAS (SQL Server Analysis Services) Data Mining to Automate Marketing Analysis.

Part 2 of a 2 part series

Any good analysis depends on creating a customer profile with good data. For data to be good, it needs to be cleansed, meaning that when you look up State in your customer database table, MD is always the entry for Maryland. Collecting and cleansing the data is the hard part, and it is where most of the work comes into play. The information you need to collect depends on your type of business. For example, if you sell to individual consumers, you will want to know their age, gender, location, spending habits and income. If you sell to other businesses, find out what sector they are in, how big they are, how much they spend and what other suppliers they use. Once all the hard work of data collection and cleansing is done, it’s time to analyze the data.
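As a trivial example of that kind of cleansing rule, the snippet below standardizes a State column to the two-letter code; the dbo.Customer table and the list of variant spellings are made up for illustration.

-- Sketch: normalize free-text state entries to 'MD'.
UPDATE dbo.Customer
SET [State] = 'MD'
WHERE UPPER(LTRIM(RTRIM([State]))) IN ('MD', 'MD.', 'MARYLAND');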

Creating a Data Mining Project

For this post I will be using Visual Studio to create my Data Mining project, which will help me build a DMX query for providing a front-end interface to my data end users. You can get some of the same results with Excel’s SQL Data Mining add-in, located here: http://www.microsoft.com/en-us/download/details.aspx?id=35578

1. Open SQL Server Data Tools (SSDT).

2. On the File menu, point to New, and then select Project.

3. Verify that Business Intelligence Projects is selected in the Project types pane.

4. In the Templates pane, select Analysis Services Multidimensional and Data Mining Project.

5. In the Name box, name the new project BasicDataMining.

6. Click OK.

7. Next I create a data source and data view associated with the data source.

8. Next we create a mining structure for the targeted mailing scenario and two data mining models. Each model type is based on a different algorithm and provides different insights into the data.

· The Decision Tree model tells you about factors that influence bike buying.

· The Clustering model groups your customers by attributes that include their bike buying behavior and other selected attributes.

Once the data mining models have been built and deployed, we can use the Lift Chart function in Visual Studio, with Bike Buyer set to yes, to see which algorithm, Decision Tree or Clustering, provides the best prediction that someone will be a bike buyer. On the chart below, the top green line represents the ideal model (100% of your customers buying a bike) and the bottom blue line represents a random guess. The closer your model is to the ideal model, the better it is at predicting a bike buyer. In this case it is the Decision Tree model.


Creating Predictions

Using the Decision Tree model together with our prospective buyer table, we can build a marketing list for our sales team to focus their follow-up calls on.

1. On the Mining Model Prediction tab of Data Mining Designer, in the Mining Model box, click Select Model.

2. In the Select Mining Model dialog box, navigate through the tree to the Targeted Mailing structure, expand the structure, select Decision Tree, and then click OK.

3. In the Select Input Table(s) box, click Select Case Table.

4. In the Select Table dialog box, in the Data Source list, select the data source view Adventure Works DW.

5. In Table/View Name, select the ProspectiveBuyer (dbo) table, and then click OK.


6. Right-click the lines connecting the Mining Model window to the Select Input Table window, and select Modify Connections. Notice that not every column is mapped.

7. From the ProspectiveBuyer table drag and drop the email, address1, city fields to the source grid.

8. From the Mining Model drag and drop the Bike Buyer field to the source grid.

9. Select the Prediction Function under Source and PredictProbability under Field, and in the Criteria/Argument field drag and drop the Bike Buyer field from the Mining Model window. Complete the argument by adding ",1" to predict the bike buyer. The results will look like the screenshot below.


Execute the prediction query by clicking the highlighted button shown in the screenshot below.


The query will produce the results below, which can be provided to your sales team.


Using Visual Studio to build my Mining Model Prediction helped me create a DMX query that I can then use with Excel or SSRS to provide front-end access for my end users, giving them a way to access the data for a focused marketing campaign or a targeted potential buyers list.

Using SSAS (SQL Server Analysis Services) Data Mining to Automate Marketing Analysis.

Part 1 of a 2 part series

Back in a former life, when I worked in the casino industry, we used Excel spreadsheets to segment our customer loyalty club members. The segmentation was very manual and required many hours to complete: each quarter we built a direct mail campaign for each segment, with the appropriate offers, so that we would have a successful and profitable campaign.

The reason we did this was clear: when each individual customer (or small groups of very similar customers) receives a personalized and highly-relevant offer, there is much more chance that the message will resonate with them. In a world of marketing overload, customers have little patience to hear what you’re telling them. Thus, it’s critically important to tailor your messages and incentives so that they are the most relevant and interesting for each individual customer.

The manual segmentation process worked a little like this.

Step 1: Segment your Customers into cluster groups

The first step is to divide your customers into distinct groups based on their RFM (recency, frequency, monetary). We would also add demographic data such as age, zip, gender to the RFM data.

Our goal with RFM was to market to each cluster, or customer group, in a way that would maximize its LTV (lifetime value). Here are some examples to think about:

  • Your best cluster consists of those customers who have been on the site recently (R), engage in many transactions (F) and spend a lot (M). You want to give these customers VIP treatment to encourage them to keep coming back.
  • For customers who have spent a lot despite making few purchases (high M/F), you want to send them offers that encourage them to come back to the site more frequently.
  • For customers who spent an above-average amount (high M), but haven’t been back to the site for a while (medium R), you want to give them aggressive offers to bring them back.
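All of the clusters above are driven by the same three numbers per customer. As a rough, hypothetical T-SQL sketch of how those R/F/M inputs might be pulled together before segmenting (the dbo.SalesTransaction table and its columns are assumptions):

-- Sketch: recency (days since last visit), frequency (transaction count) and monetary (total spend) per customer.
SELECT CustomerID,
       DATEDIFF(DAY, MAX(TransactionDate), GETDATE()) AS RecencyDays,
       COUNT(*)                                       AS Frequency,
       SUM(Amount)                                     AS Monetary
FROM dbo.SalesTransaction
GROUP BY CustomerID;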

Step 2: Segment your New Customers into Actionable Sub-Segments

The best way to encourage new customers to become long-term customers is to give them a good experience. Beyond good on-site and customer service experiences, this is a good time to give them extra benefits or bonuses to make them feel welcome and appreciated. Periodically we would perform cluster analysis on current high-LTV loyalty club members to create profiles of how they acted when they first became members. With this data, we based new-member offers on how closely a new member’s behavior matched the patterns of those current high-LTV loyalty club members.

Going deeper, you should segment the new group into two important sub-groups which will receive extra attention:

  • One-time-only customers – these are customers who never returned after their first purchase (or payment, deposit, trade, etc.). Your goal is to bring them back again. Typically, you want to send them particularly attractive offers to encourage them to return in the short term.
  • New customers with high potential – these are the customers with the highest potential long-term value. You can identify this group using predictive forecasting, which uses customer modeling technology to incorporate both behavioral and demographic data to predict how a customer will likely behave in the future, based on similar customer data from current loyalty club members.

Using SSAS Data Mining to automate segmentation

We can automate the above manual clustering process using SSAS data mining tools. With my dataset of both demographic and RFM data I could build a clustering data mining project in Visual Studio. The SSAS Clustering algorithm groups cases from a dataset into clusters containing similar characteristics. Using these clusters, you can explore the data and learn about relationships among your cases. Additionally, you can create predictions from the clustering model created by the algorithm.

In my next post, we will create a Basic Data Mining project using Microsoft Dynamics GP data. I will document the steps needed to complete a scenario for a targeted marketing campaign in which you create models for analyzing and predicting customer purchasing behavior and for targeting potential buyers.