2 years of silence

Since I moved from consultancy to the aviation business there has not been any activity on this blog. Maybe there will be in the future, but for now you can find my public presentations about the data-driven world in the airport ecosystem here:

https://www.slideshare.net/Solita_Oy/finavia-agile-cloudbased-dw-development

https://www.tdwi-konferenz.de/tdwi2018/startseite-englisch/program/conference-program/speakerdetails/saction/detail/sspeaker/tommi-vihervaara.html

https://docplayer.fi/63951620-Tommi-vihervaara-building-the-data-driven-culture-in-aviation-business.html

 

BR,

Tommi


Birst security filtering

birst

It's been quite a while since the last blog post, but here we go again! As I've been involved with Birst lately, it's time to blog some under-the-hood stuff about Birst.

I want to share an experience that is mentioned here and there in the Birst community articles. It's about securing dimension attributes and how they affect the related fact measures depending on how the data sources are configured.

In my example I have the following fact, dimension and "access table" data sets:

birst_fact

birst_dimension

birst_accesstable

I want to secure the fact data depending on which business unit (BU) the user belongs to. As we can see, the test user should only be able to see the HR fact data.

This can be accomplished by using the basic security filtering feature in Birst:

birst_security_ds

birst_sales_nodimension_full

Then, when using Visualizer under the test user's account, we can see that the filtering works. Or does it?

That depends on what you consider the correct output. By default, Birst applies the security filter ONLY when the corresponding dimension is used in the visualization:

birst_sales_by_bu

As we can see, if we only choose the sales measure and no dimension attributes, the visualization shows the total amount of sales and does not apply the security filter behind the dimension attribute (as the attribute is not explicitly used).

Sometimes we may want to filter the underlying fact measure data "automatically" without explicitly adding the dimension attribute to the visualization. I find this problematic, as it violates the "one set of numbers" rule: every user could see a different total depending on how their data is filtered by security filters.

Anyhow, if this is acknowledged and the output is acceptable, the way to do it in Birst is to check the filtered attribute field as a Measure (not very "logical" in my opinion, but it works :)). Then we end up with a filtered total without using the dimension attribute:

birst_sales_nodimension
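To make the difference concrete, here is a minimal PowerShell sketch of what the security filter effectively does. The data, user name and business units are made up for illustration; the point is only to show why the grand total is unfiltered by default and filtered once the attribute is also treated as a measure.

# Hypothetical fact rows and a user-to-BU access table
$fact = @(
    [pscustomobject]@{ BU = 'HR';    Sales = 100 },
    [pscustomobject]@{ BU = 'Sales'; Sales = 250 }
)
$access = @([pscustomobject]@{ User = 'testuser'; BU = 'HR' })

$allowedBUs = ($access | Where-Object { $_.User -eq 'testuser' }).BU

# Dimension attribute in the visualization: the filter is applied per BU
$fact | Where-Object { $allowedBUs -contains $_.BU } | Group-Object BU |
    ForEach-Object { [pscustomobject]@{ BU = $_.Name; Sales = ($_.Group | Measure-Object Sales -Sum).Sum } }

# Only the measure chosen (default behaviour): the total ignores the filter
($fact | Measure-Object Sales -Sum).Sum                                                  # 350

# Attribute also checked as a Measure: the total respects the filter
($fact | Where-Object { $allowedBUs -contains $_.BU } | Measure-Object Sales -Sum).Sum   # 100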


TM1 Maximum overdrive! Running TI processes in parallel with multiple cores

The Problem

Since the release of IBM Cognos TM1 10.2 it has been possible to put multi-core CPUs into full action, as MTQ (Multi-Threaded Query) allows queries to be spread across multiple cores. Unfortunately this applies only to queries (as figure 2 here illustrates); TI processes are still run in a single thread and can only use one core at a time. This is quite annoying, because if you have, for example, 64 cores you end up running CPU-intensive TI processes on just one of them! With some serious magic this limitation can of course be worked around, with a little help from the tm1runTI.exe tool: with it we can start multiple TI processes at the same time from the command prompt and consume more than one core simultaneously.

The Solution

This trick consists of three separate "modules":

  • tm1runti_execute_parametrized.bat – Parametrized batch file that wraps the execution of tm1runTI.exe
  • Maintenance.Execute.Tm1RunTI.Parametrized – Universal parametrized wrapper TI process that wraps the calling of the batch file above. Can be used to run any TI process in parallel execution mode.
  • A parametrized TI process that can safely be run simultaneously without locking. For example, a version copy process parametrized by year/month etc. See details in the "Usage example" section.
The parametrized batch file code

The batch file itself is very straightforward: it needs authentication information and the name of the process that is going to be executed in parallel.
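Since the batch file is only shown as a screenshot here, below is a rough PowerShell equivalent of such a wrapper. It is a sketch only: the installation path, host and credentials are placeholders, and the way the TI parameters are passed (as name=value pairs after the options) should be verified against the tm1runti documentation for your TM1 version.

param(
    [string]$AdminHost,
    [string]$Server,
    [string]$User,
    [string]$Password,
    [string]$Process,
    [string]$TiParameters = ""   # e.g. "pCube=Sales pYear=2017"
)

# Placeholder path to tm1runti.exe in the TM1 installation
$tm1runti = "D:\Program Files\ibm\cognos\tm1_64\bin64\tm1runti.exe"

# Launch the TI process and return immediately, so several calls can run in parallel
$argumentList = "-adminhost $AdminHost -server $Server -user $User -pwd $Password -process $Process $TiParameters"
Start-Process -FilePath $tm1runti -ArgumentList $argumentList -NoNewWindow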

The parametrized wrapper TI process (Maintenance.Execute.Tm1RunTI.Parametrized) also takes parameters: the TI process that is going to be executed in parallel and its parameter names and corresponding values:

Parametrized TI process parameters

The actual code is also nice and simple. It just checks whether the parameter values are given, generates a batch file call and then executes it in the command prompt:

The parametrized TI process code

Usage example

Now that the core modules have been introduced it's time for a usage example. Let's say you have multiple big cubes and you want to make copies of versions in each of the cubes periodically. With big cubes and big data volumes that could take a long time, as TM1 only uses one core for TI process execution. But here we can now make a big difference: we can divide the copying of the versions by year and do the copying for each cube at the same time (the only limitation here is the number of cores available).

Here is the example code:

Sample usage code

What we do here is that in the outer while loop (left out of the picture) we iterate through all the cubes that we want to process with the version copying. Then in the inner while loop we go through the years between 2006 and 2018. Depending on the number of cubes we could end up with, for example, over 100 separate TI process calls. As they are separated from each other by the time dimension they don't lock each other and can be run at the same time (this is the most important part: one must design the runnable TI process in such a way that when it's executed in separate threads the executions don't lock each other up!).
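Since the screenshot carries the actual TI code, here is the same fan-out idea sketched in PowerShell; the cube names, credentials and process name are hypothetical. Each iteration launches its own tm1runti.exe call without waiting, which is exactly what lets the copies spread over the available cores:

$tm1runti = "D:\Program Files\ibm\cognos\tm1_64\bin64\tm1runti.exe"   # placeholder path
$cubes = @('Sales', 'Finance', 'HR')                                  # hypothetical cube list
$years = 2006..2018

foreach ($cube in $cubes) {
    foreach ($year in $years) {
        # Every call is an independent, non-blocking tm1runti.exe instance
        Start-Process -FilePath $tm1runti -NoNewWindow -ArgumentList `
            "-adminhost localhost -server Planning -user admin -pwd password -process Copy.Version.By.Year pCube=$cube pYear=$year"
    }
}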

So, when we put this piece of code into action we can really squeeze everything out of those expensive PVUs, as we can see here:

Task manager seeing red

TM1Top: TM1 having busy time processing

Summary

The 10.2.2 introduction of MTQ was a relief, and it's a feature that should have been there 10 years ago when multi-core CPUs arrived. Unfortunately TM1 still lacks an out-of-the-box feature to execute TI processes in parallel, but with this little trick we can overcome the limitation and unleash the full power of the CPU in our TM1 solution!


TM1: Application Maintenance utility and CAM security. The INVALID_IDENTITY challenge.

In the newest TM1 versions it's possible to refresh the TM1 Applications user permissions and do some other maintenance work via the command line. The bad thing is that it still has some issues when working with CAM security (which happens to be the case in almost 100% of production environments). One may encounter INVALID_IDENTITY-related error messages like this:

com.ibm.cognos.fpmsvc.exception.FPMSVCException: INVALID_IDENTITY
exitCode=700

IBM's official answer says that one must use the SDK and develop some custom code to get hold of a cookie value that is set by the IBM Cognos BI portal. Luckily, fellow TM1 gurus have also found an alternative way of fetching the "cam_passport" cookie value, but unfortunately the value can keep changing multiple times a day. Now, if you have automated the maintenance utility it may keep going for a while, but when the cookie value expires the utility stops working. And that's bad.

So we need a way to automatically fetch the "cam_passport" cookie value when executing the application maintenance utility (app_maintenance.bat). Once again PowerShell comes to the rescue! Here's what you should do inside a PowerShell script:


$url = "https://<your_cognos_gateway>/ibmcognos/cgi-bin/cognosisapi.dll?b_action=xts.run&m=portal/cc.xts&gohome="
$cookiejar = New-Object System.Net.CookieContainer
$webrequest = [System.Net.HTTPWebRequest]::Create($url);
$webrequest.CookieContainer = $cookiejar
$credut = New-Object System.Net.NetworkCredential;
$credut.UserName = "username";
$credut.Password = "password";
$credut.Domain = "domain.com"
$webrequest.Credentials = $credut
$response = $webrequest.GetResponse()
$cookies = $cookiejar.GetCookies($url)

$cookies["cam_passport"].Value | Out-File "d:\temp\cam_passport.txt"

Basically what it does is make an HTTP request to the IBM Cognos portal using the credentials provided (these must be the credentials of a TM1 admin user that has privileges to run app_maintenance.bat). On the last line it writes the "cam_passport" cookie value into a temporary file.

Save the script into a file, e.g. "Update_CAM_passport.ps1", and run it from the cmd command line with:

powershell.exe -Noninteractive -Noprofile -Command "D:\TM1\scripts\Update_CAM_passport.ps1"

Now, to make sure that the cam_passport is always valid, you can schedule the command above with the Windows Task Scheduler and run it, say, every 30 minutes.
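If you prefer creating the schedule from the command line, a task along these lines should do it when run from a PowerShell prompt (the task name, script path and run-as account are placeholders; use the same TM1 admin account as above):

# Refresh the cam_passport file every 30 minutes via a scheduled task
schtasks /Create /TN "Refresh CAM passport" /SC MINUTE /MO 30 `
    /TR "powershell.exe -Noninteractive -Noprofile -Command D:\TM1\scripts\Update_CAM_passport.ps1" `
    /RU "DOMAIN\tm1admin" /RP "password"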

The final step is to call app_maintenance.bat, giving it the freshly fetched cam_passport value as a parameter. We can achieve that as follows (the contents of a batch file):

@Echo on
timeout 2
set /p p_cam_password=<d:\temp\cam_passport.txt
"D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\tools\app_maintenance.bat" ^
 -op refreshrights -serviceurl https://tm1hostname.com:9514 ^
 -credentials CAM:%p_cam_password% -app <application_id>

Where the application_id is the unique id of your application.

Se on siinä! As we would say in Finnish. That's about it. Now you can sleep safely knowing that your automated application maintenance utility keeps running even if the cam_passport value changes.
 


TM1: Run SQL Server Job from TI process

TI_sql_job_image

TM1 Turbo Integrator is quite limited when it comes to loading data from different sources and transforming and cleaning the data before it can be loaded into cubes. Therefore you might have a dedicated data repository for your TM1 solution, for example a data warehouse. Sometimes you need to refresh the DW data before it can be loaded into your TM1 solution. Then you face the question: how do you manually execute the data warehouse refresh ETL jobs? Of course you can always use SQL Server Management Studio and run the jobs there, but if the end users need to refresh the data manually during working hours it would be nice to provide them with a single-platform solution to accomplish this.

So let's face it: maybe you already have some user interface for the end users to execute some of the TM1 solution's internal TI processes. It could be via TM1 Perspectives websheets on TM1Web, straight from TM1 Architect, etc. Wouldn't it be nice to have the ability to run the external DW loading processes from the same place? Here is how to accomplish it.

Set up the logins and jobs in SQL Server

TI_sql_job_image_login_properties

First of all you need to ensure that the user account used when the TI process connects to SQL Server has all the needed permissions in the SQL Server instance. At least the following three things must be ensured (see the sketch after this list):

  • The login has the SQLAgentUserRole role checked in the msdb database. Then the login will have permissions only on local jobs and job schedules that it owns
  • The login has the required permissions to access the source files etc. That can be solved by defining a proxy account, as I have described in my previous blog post.
  • The job must be owned by the same user you will use when connecting from TM1. This is because we don't want to throw admin permissions around, and we want the user to have rights to only one or a couple of jobs, not all of them
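As a rough sketch of those grants, run here through PowerShell's Invoke-Sqlcmd (the login, job and instance names are hypothetical and the exact statements depend on your environment):

Import-Module SqlServer

$sql = @"
USE msdb;
CREATE USER [DOMAIN\tm1svc] FOR LOGIN [DOMAIN\tm1svc];
ALTER ROLE SQLAgentUserRole ADD MEMBER [DOMAIN\tm1svc];
-- Hand the job to the same account so it can start it without admin rights
EXEC msdb.dbo.sp_update_job @job_name = N'MyDW_Load_Job', @owner_login_name = N'DOMAIN\tm1svc';
"@

Invoke-Sqlcmd -ServerInstance 'MYSQLSERVER' -Query $sql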

 

Set up TI process to run the job: zSQLJob_StartJob

To accomplish this we need two different TI processes: one to actually execute the job and one to determine whether the job execution has finished. Let's start with the main process, zSQLJob_StartJob. It has one parameter, p_sql_job_name, which contains the name of the job to be executed. The process itself is quite straightforward and here is the complete code:

TI_sql_job_image_zsqljob_start

What it does is basically just open an ODBC connection to the msdb database of the SQL Server instance and then execute the job. The magic here is that the process won't finish until the job on the SQL Server side has finished. To get this behaviour we poll the msdb database every 5 seconds to find out whether the job has finished or not.

Set up TI process to determine the status of the running job: zSQLJob_CheckJobStatus

To find out whether the job has finished or not we just query the job status directly from the msdb database. To do that we use the following SQL query and put it in the “Data Source” tab like this:


SELECT CASE WHEN EXISTS (
    SELECT job.Name, job.job_ID, job.Originating_Server,
           activity.run_requested_Date,
           DATEDIFF(minute, activity.run_requested_Date, GETDATE()) AS Elapsed
    FROM msdb.dbo.sysjobs_view job
    INNER JOIN msdb.dbo.sysjobactivity activity
        ON job.job_id = activity.job_id
    WHERE activity.run_requested_Date IS NOT NULL
      AND activity.stop_execution_date IS NULL
      AND job.name = 'JOHIDW_Update_Publishing'
) THEN 1 ELSE 0 END AS JOB_RUNNING_STATE

TI_sql_job_image_zsqljob_checkstatus_datasource

The only magic in this process is to save the single result value into the global variable vnJOBSTATE_global that we defined in the master process. If the value is 1, the job is still running and the master process keeps polling the msdb database every 5 seconds. When the job finishes, the value turns to 0 and the master process can finish as well.
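For reference, the same start-and-poll pattern looks like this outside TM1, sketched in PowerShell (the instance name is hypothetical, the job name is the one from the query above, and the SqlServer module is assumed to be available):

Import-Module SqlServer

$instance = 'MYSQLSERVER'
$jobName  = 'JOHIDW_Update_Publishing'

# Start the job, as the TI process does over its ODBC connection to msdb
Invoke-Sqlcmd -ServerInstance $instance -Database msdb -Query "EXEC msdb.dbo.sp_start_job @job_name = N'$jobName';"

# Poll every 5 seconds until no active execution remains for the job
do {
    Start-Sleep -Seconds 5
    $state = Invoke-Sqlcmd -ServerInstance $instance -Database msdb -Query @"
SELECT CASE WHEN EXISTS (
    SELECT 1 FROM msdb.dbo.sysjobs_view job
    INNER JOIN msdb.dbo.sysjobactivity activity ON job.job_id = activity.job_id
    WHERE activity.run_requested_date IS NOT NULL
      AND activity.stop_execution_date IS NULL
      AND job.name = N'$jobName'
) THEN 1 ELSE 0 END AS JOB_RUNNING_STATE;
"@
} while ($state.JOB_RUNNING_STATE -eq 1)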

Conclusion

There may be times when you need to execute SQL Server jobs to make sure the data to be loaded into TM1 is up to date. With this example you can build a nice and easy integration between TM1 and SQL Server and allow end users to execute simple SQL Server jobs when needed.


Using Prompt API for IBM Cognos BI Report Studio

Prompt API for IBM Cognos BI

Default values for report prompts? It doesn't sound very difficult, does it? Well, as long as the default values are static you can define them easily in your report properties, but in the real business world there's no such thing as static. Change is just around the corner, so we need more dynamic reporting solutions. Making dynamic default values for report prompts is something that should be an out-of-the-box feature (as it is in Microsoft SQL Server Reporting Services, for example), but with IBM Cognos BI Report Studio it has meant JavaScript/HTML/DOM hacks. Now IBM has released the Prompt API for IBM Cognos BI to tackle this issue. Well, it's still JavaScript, but at least there is a documented API for it. Let's give it a shot.

In my simple example I've created a report with two prompts and one HTML Item. The first prompt, "kkprompt", fetches the default value that is going to be selected in the other prompt. You can later hide the first prompt box because it doesn't need any user interaction. Now, to get the "defaultselprompt" prompt working with a dynamic default value we need to add the following code inside the HTML Item.

prompt_api_code

And that's pretty much it! I'm still amazed that this kind of feature is hidden behind some JavaScript code and not available in the prompt box properties, but hey, maybe in the next major release?

-GD


SQL Server 2012 Integration Services DLL Hell: Microsoft Visual Studio Tools for Applications Runtime 3.0

One of the major drawbacks of SQL Server Integration Services (SSIS) is the mixture of 32-bit and 64-bit environments:

  • The design and development environment (SQL Server Data Tools) runs in 32-bit, though you can debug a package in 64-bit mode
  • Production packages can be run in 32-bit or 64-bit, whichever you choose in the package properties

Well, there are moments when this arrangement feels like a gift from god, but for me it was once again a real pain in the ass. I was recently working with a fresh installation of SQL Server 2012 and Data Tools. When I tried to edit a Script Task in my SSIS package I faced the following error message:

Could not load file or assembly 'Microsoft.VisualStudio.Tools.Applications.Core'

It was a fresh installation and everything seemed to be in order, but no, the script editor would not open. After some investigation I found out that, when the moon and the stars are aligned just so, there is a small chance of missing one assembly if SQL Server and its related components are installed in the wrong order. The thing is that to be able to run a package in both 32-bit and 64-bit mode there have to be two sets of assemblies: one for 32-bit and one for 64-bit. My problem was that I was missing the 32-bit version of Microsoft.VisualStudio.Tools.Applications.Core.

From the "Add/Remove Programs" list in Windows I noticed that yes, I had the Microsoft Visual Studio Tools for Applications x64 Runtime 3.0, but what I was missing was the x86 version of the assembly. Grabbing the MSI installer from the installation media (redist/VSTA/runtime/x86/VSTA_RT30.msi) and installing it fixed the problem.
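A quick way to check which bitness of the assembly is actually registered, before digging for installers, is to search the GAC folders (a sketch; the standard .NET 4.0 GAC locations under the Windows directory are assumed):

# Look for the VSTA runtime assembly in the 32-bit, 64-bit and MSIL GAC folders
$gacRoots = "GAC_32", "GAC_64", "GAC_MSIL" | ForEach-Object { Join-Path $env:windir "Microsoft.NET\assembly\$_" }
Get-ChildItem $gacRoots -Recurse -Filter "Microsoft.VisualStudio.Tools.Applications.Core.dll" -ErrorAction SilentlyContinue |
    Select-Object FullName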

Beware of DLL hell!

 


Scheduled backup solution for TM1 with 7zbackup

7zip_logo

When working with in-memory applications like TM1, it's essential to have a working backup strategy. TM1 doesn't provide any built-in functionality for this, but with a little help from third-party tools you can easily build a lightweight, working backup solution. What you need is 7zbackup, 7-Zip 9.20 and Windows PowerShell 2.0.

Getting started, step 1: Modify the “selection file”

(Notice: I personally needed to go back to version 7zbackup 1.8.5 stable to get the script running successfully.)

After installing the required applications, start by modifying the 7zbackup "selection" config file. That's the file that contains the configuration needed for the backup process. There are many different parameters you can modify, but let's just go through the most important ones.

includesource=<path_to_your_tm1_datafiles_folder>\datafiles|alias=PROD

Here you define what to back up. The datafiles folder is the heart of TM1, so you need to back it up entirely. If you want you can also include the logfiles folder, but it's not necessary.

rotate=60

As the comment line in the file says, "This variable holds the number of historical archive backups you want to keep on target media." I tend to play it safe, and since there's disk space available nowadays, two months of rotation should be enough. Just remember to copy the actual backup archives to a remote location periodically so you also have a real disaster recovery option.

There are also other parameters you may want to configure, but this will do for now.

Step 2: Set up a batch file to run the PowerShell script

Next we set up a classic bat file to make it possible to execute the PowerShell script from the Windows Scheduler. Here are the contents of the bat file:


@ECHO off
REM Grab a date stamp into %WD% for the timestamped log file name
for /f "tokens=1" %%a in ('powershell.exe "Get-Date -Format yyyyMMdd"') do set WD=%%a
powershell.exe -Noninteractive -Noprofile -Command "&{<path_to_your_7zbackup_folder>\7zBackup.ps1 --type full --selection <full_path_to_your_selection_file> --destpath <path_to_your_backup_destination_folder> --logfile <path_to_your_7zbackup_logging_folder>\daily_backup_%WD%.txt}"

Basically it runs the 7zBackup.ps1 script, telling it to take a full backup, use the settings from the selection file, put the zipped backup in the given destination folder and write 7zBackup's own log file into the given logging directory with a timestamped name.

If you haven’t configured the script execution policy you may encounter this error message:

File 7zBackup.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.

That means you need to set the execution policy to allow executing custom scripts. More information here.
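Setting the policy is a one-liner from an elevated PowerShell prompt; RemoteSigned is a common choice, but use whatever your security policy allows:

# Allow locally created scripts to run; downloaded scripts must still be signed
Set-ExecutionPolicy RemoteSigned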

Step 3: Schedule the backup process with Windows Scheduler

The last thing to do is to schedule the backup process on a daily basis (or more often if you like). There are five tasks to set up (steps 2 to 5 are sketched as a script after the list):

  1. Run SaveDataAll() on the TM1 server to make sure that all the data is written to disk before the backup starts
  2. Shutdown the TM1 server: NET STOP <your_tm1_server_service_name>
  3. Execute the backup bat file (created in step 2)
  4. Start the TM1 server again: NET START <your_tm1_server_service_name>
  5. (OPTIONAL): Copy the newly created backup zip file from destination folder to your remote backup storage location.
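Steps 2 to 5 can be wrapped into a single PowerShell script that the scheduler runs right after the SaveDataAll() chore; the service name and paths below are placeholders:

$service = 'YourTM1ServerService'            # placeholder TM1 Windows service name

Stop-Service -Name $service                  # step 2: shut the TM1 server down
& 'D:\TM1\backup\run_7zbackup.bat'           # step 3: run the backup bat file created in Step 2 of this post
Start-Service -Name $service                 # step 4: start the TM1 server again

# Step 5 (optional): copy the newest archive to the remote backup location
Get-ChildItem 'D:\TM1\backup\archives\*.7z' | Sort-Object LastWriteTime |
    Select-Object -Last 1 | Copy-Item -Destination '\\backupserver\tm1_backups'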

Here we are! Happy backing up!

-GD


Configuring SharePoint 2013 with Business Intelligence features

SharePoint-2013

Installing SharePoint 2013 and SQL Server 2012 is quite straightforward. Configuring them and enabling all the nice Business Intelligence features from scratch is a different thing. I recently took a ride down that Via Dolorosa and completed the whole process once again. Here is what you should keep in mind when going through it yourself.

Chapter I: Prerequisites

Before the whole installation process can begin, you must install a bunch of SharePoint 2013 installation prerequisites. There is a complete list of them in a TechNet blog article so I won't go through them in detail here. What you should do is download all of them, place them in one directory, e.g. E:\sp2013_pre, and run the following command:


D:\prerequisiteinstaller.exe /SQLNCLi:E:\sp2013_pre\sqlncli.msi /PowerShell:E:\sp2013_pre\Windows6.1-KB2506143-x64.msu /NETFX:E:\sp2013_pre\dotNetFx45_Full_x86_x64.exe /IDFX:E:\sp2013_pre\Windows6.1-KB974405-x64.msu /Sync:E:\sp2013_pre\Synchronization.msi /AppFabric:E:\sp2013_pre\WindowsServerAppFabricSetup_x64.exe /IDFX11:E:\sp2013_pre\MicrosoftIdentityExtensions-64.msi /MSIPCClient:E:\sp2013_pre\setup_msipc_x64.msi /WCFDataServices:E:\sp2013_pre\WcfDataServices.exe /KB2671763:E:\sp2013_pre\AppFabric1.1-RTM-KB2671763-x64-ENU.exe

Pretty much all of the prerequisites can be installed manually but the AppFabric installation and configuration is error-prone so it’s better to leave it (and all of the prerequisites) to the built-in prerequisite installer.

NOTE! The built-in prerequisite installer wants the SQL Server 2008 R2 Native Client by default. I was using the newer SQL Server 2012, so I tricked the installer and replaced sqlncli.msi with the SQL Server 2012 Native Client installer (same filename, different driver).

Chapter II: Installation and SharePoint Product Configuration Wizard

Before installing, make sure that you have some domain accounts available for the various service accounts used by SharePoint. Then it's pretty much "next, next, next": just run through the installer, and at the end of the installation follow the instructions and start the SharePoint Product Configuration Wizard.

SharePoint_config_wizard_ (2) SharePoint_config_wizard_ (3) SharePoint_config_wizard_ (4)

First we create a new server farm, then specify the database settings for the configuration database and a domain username to access the database, followed by a secure farm passphrase.

SharePoint_config_wizard_ (5) SharePoint_config_wizard_ (6) SharePoint_config_wizard_ (7)

The last thing to do is choose the port for the SharePoint Central Administration Web Application. If you need a specific one, now is the time to set it. For security we go with NTLM.

Chapter III: Configuring the farm in SharePoint Central Administration Web Application

After the Configuration Wizard finishes it’s time to use SharePoint Central Administration Web Application.

First we start the Farm Configuration Wizard and create a dedicated domain user to be used as the Service Account.

Then it’s time to choose which service applications you want to run in your SharePoint Farm. Make sure that you check at least the following services:

  • Excel Services Application
  • PerformancePoint Service Application
  • Secure Store Service

SharePoint_Farm_Config_wizard_ (1) SharePoint_Farm_Config_wizard_ (2) SharePoint_Farm_Config_wizard_ (3)

As it states, the actual configuration phase "shouldn't take long", but when installing on a virtual machine that may not have the appropriate resources it can take quite a while.

SharePoint_Farm_Config_wizard_ (5)SharePoint_Farm_Config_wizard_ (4)

Chapter IV: Create new (Business Intelligence) site collection

After the Farm Configuration Wizard finishes it’s time to create our first site collection. In the create site collection wizard you must provide the name for the site, URL and the template that you’d like to use with your site. As we are creating Business Intelligence solutions we choose “Business Intelligence Center” (under Enterprise tab).

CreateSiteCollection

Now we have our site in place and next thing to do is install the SQL Server PowerPivot for Sharepoint instance.

Chapter V: Install SQL Server PowerPivot for SharePoint instance

For storing user-created Excel PowerPivot documents we need a separate SQL Server Analysis Services instance installed in PowerPivot for SharePoint mode. Previously this had to be installed inside the SharePoint farm, but with SharePoint 2013 it doesn't matter where it is installed.

SQL_PowerPivot_SharePoint_ (1) SQL_PowerPivot_SharePoint_ (2)

Just run the SQL Server installer application, choose “Perform a new installation of SQL Server 2012”  and in the “Setup Role” page choose “SQL Server PowerPivot for SharePoint“.

SQL_PowerPivot_SharePoint_ (3) SQL_PowerPivot_SharePoint_ (4) SQL_PowerPivot_SharePoint_ (5)

The rest of the installation is straightforward. Just remember to create and use a dedicated user account for the Analysis Services service.

SQL_PowerPivot_SharePoint_ (6) SQL_PowerPivot_SharePoint_ (7) SQL_PowerPivot_SharePoint_ (8)

Chapter VI: Manage Excel Services data model settings

The next thing to do is configure the Excel Services settings. Just add the name of the previously installed SQL Server Analysis Services (PowerPivot for SharePoint) instance in the Manage Service Application section. Follow the detailed instructions at:

http://technet.microsoft.com/en-us/library/jj219780.aspx

Chapter VII: Enable Claims to Windows Token Service

If you now try creating an Excel spreadsheet and uploading it to a SharePoint document collection you may encounter this error message: External Data Refresh Failed.

ExternalDataRefresh

That is because we haven’t configured the Claims to Windows Token service yet. To achieve that follow the instructions found here:

http://technet.microsoft.com/en-us/library/ff487975.aspx

claimstotoken

Chapter VIII: Configure EffectiveUserName with Excel Services

We are almost there. The next thing to do is enable the use of EffectiveUserName with Excel Services. Briefly, the steps are:

  • Enable EffectiveUserName property check box in Manage Service Applications
  • Add the Excel Services application user account into the administrators of the SQL Server Analysis Services instance (via SQL Server Management Studio and Analysis Server properties)
  • Now the data refresh should work properly!

For detailed instructions see the following TechNet article: http://technet.microsoft.com/en-us/library/jj938164.aspx

Chapter IX: Deploy PowerPivot for SharePoint 2013 add-in

To enable server-side data refresh processing, collaboration, and management support for PowerPivot workbooks go and download & install the Microsoft® SQL Server® 2012 SP1 PowerPivot® for Microsoft® SharePoint from here.

Chapter X: Configure PowerPivot and Deploy Solutions

To get the PowerPivot Gallery, Schedule Data Refresh, Management Dashboard and data provider features working properly, we must next run the PowerPivot for SharePoint 2013 Configuration tool.

Follow the detailed instructions here: Configure PowerPivot and Deploy Solutions (SQL Server 2012 SP1)

PowerPivot_for_sp2013_config (2) PowerPivot_for_sp2013_config (3) PowerPivot_for_sp2013_config (4)

There may be warnings about the Secure Store not being configured, but we don't need to worry about that at this phase.

PowerPivot_for_sp2013_config (5) PowerPivot_for_sp2013_config (6)

All set! Finally make sure you add the PowerPivot Service Application user to the administrators of the Analysis Services instances.

PowerPivot_for_sp2013_config (1)

Chapter XI: Add a BI Semantic Model Connection Content Type to a Library (PowerPivot for SharePoint)

Adding the BI Semantic Model Connection content type to a library allows quick use of e.g. Power View straight from the SharePoint site. Detailed instructions on how to add it can be found here.

To avoid future connection problems, a wise thing to do at this point is to make sure that all the corresponding SharePoint service application user accounts are added as administrators to all the Analysis Services instances that will be used with SharePoint.

AdminaccountsSSAS

Chapter XII: Install and configure Reporting Services SharePoint Mode for SharePoint 2013

In today's world there is a lot of buzz around Self-Service BI and all the nice-looking analysis tools. Microsoft has them all, but there might still be a need for "reporting for the masses"; in other words, if you need standardized reports delivered to plenty of people, Reporting Services comes in handy. Next we go through the steps of installing and configuring Reporting Services SharePoint Mode for SharePoint 2013. Detailed instructions can be found here:

Install Reporting Services SharePoint Mode for SharePoint 2013

SSRS_for_SP2013_installconfigure_ (1) SSRS_for_SP2013_installconfigure_ (2) SSRS_for_SP2013_installconfigure_ (3)

First we run the SQL Server 2012 installer once again and choose "Add features to an existing instance of SQL Server 2012". On the Feature Selection page you must choose the following:

  • Reporting Services – SharePoint
  • Reporting Services Add-in for SharePoint products

In Reporting Services Configuration page choose Reporting Services in SharePoint Integrated Mode and Install only.

SSRS_for_SP2013_installconfigure_ (4) SSRS_for_SP2013_installconfigure_ (5)

All set, hit install and after a moment the installation is ready.

SSRS_for_SP2013_installconfigure_ (6) SSRS_for_SP2013_installconfigure_ (7) SSRS_for_SP2013_installconfigure_ (8)

Then comes the configuration part. First we create the Reporting Services service application in the Manage Service Applications section of Central Administration. You must provide a unique name for the service application, then create a new application pool and a dedicated user account to run the pool. Also choose the database server on which to create the SSRS datastore. For authentication we go with Windows Authentication.

SSRS_for_SP2013_installconfigure_ (9) SSRS_for_SP2013_installconfigure_ (10)

That’s pretty much it and again we wait for SharePoint to get the configuration done.

SSRS_for_SP2013_installconfigure_ (11)

Next we activate the Power View Site Collection Feature in Site Collection Administration. Since we want to leverage BISM data models with Power View, make sure that the Power View Integration Feature is also activated.

The last thing to do with Reporting Services is to Add Report Server Content Types to a Library (Reporting Services in SharePoint Integrated Mode).

Chapter XIII: Configure PerformancePoint Services

To enable visually nice-looking dashboarding capabilities to SharePoint we must enable and configure PerformancePoint Services.

Configure the unattended service account for PerformancePoint Services

You might hit the error message "The data source cannot be used because PerformancePoint Services is not configured correctly. Additional details have been logged for your administrator." If the PerformancePoint Services unattended service account is not set, you must set it. The setting is located under "Manage service applications" in SharePoint Central Administration, on the PerformancePoint Services management page.

PerformancePoint_unattended_service_ (1) PerformancePoint_unattended_service_ (2) PerformancePoint_unattended_service_ (3) PerformancePoint_unattended_service_ (4)

Here is how to do it: Configure the unattended service account for PerformancePoint Services

Check also: http://ericphan.net/blog/2010/4/19/performance-point-server-error-the-unattended-service-accoun.html

Why can’t SharePoint Dashboard Designer 2013 connect to SQL Analysis Services 2012?

By default, when running Dashboard Designer and trying to connect to a SQL Server 2012 Analysis Services instance, you only get empty values. If you go to the Windows Event Viewer you can find the reason: Dashboard Designer 2013 needs ADOMD.net version 10 support!

Here is some more information on the matter:

Why can’t SharePoint Dashboard Designer 2013 connect to SQL Analysis Services 2012?
Can’t get SSAS databases to appear in Performance Point Dashboard Designer? Check you ADOMD.net version!

This is definitely something that Microsoft should have taken care of in advance. It turns out that Dashboard Designer 2013 was written for the SQL Server 2008 R2 Analysis Services clients, so you need to download and add client support for ADOMD.net version 10 from http://www.microsoft.com/en-us/download/details.aspx?id=16978

ado2008

FINAL CHAPTER: Tips n’ Tricks and Troubleshooting

Enable DAX queries with Report Builder

http://sqlblog.com/blogs/marco_russo/archive/2012/01/05/use-parameters-in-your-dax-queries.aspx

Remember to use Data Tools instead of Report Builder since the dialog is not available there.

Publish a Report to a SharePoint Library

http://technet.microsoft.com/en-us/library/bb283155.aspx

SharePoint 2013: The server was unable to save the form at this time

http://www.peneveyre.com/blog/PermaLink,guid,eeacb91f-8cdb-49d1-a59a-59de40ffe18f.aspx

http://sharepointerthilosh.blogspot.fi/2013/03/the-server-was-unable-to-save-form-at.html

 


Powershell meets TM1 again: List all the TM1 objects that exist on one server but not on the other

Now that I've fallen in love with PowerShell, I made the effort to write yet another useful (?) script to use along with TM1 model deployments. When you are deep in the development phase of your project you end up adding and deleting objects all the time. At the same time, if you use file-based deployment (copying datafiles from one server to another) you might end up in a situation where your target server has old objects that don't exist in the source anymore (they've been deleted or renamed).

There is no quick built-in feature to check which objects in the target environment (production?) no longer exist in the source (development?). Here is a piece of PowerShell that gives you a list of all the TM1 objects that exist on one server but not on the other:

$datafilesfolderA = ""
$datafilesfolderB = ""

# TM1 object files (processes, chores, cubes, dimensions, rules); control objects ("}...") are skipped
$filter = { ($_.Name -like "*.pro" -or $_.Name -like "*.cho" -or $_.Name -like "*.cub" -or $_.Name -like "*.dim" -or $_.Name -like "*.rux") -and $_.Name -notmatch "}" }

Compare-Object (Get-ChildItem $datafilesfolderA | Where-Object $filter | Select-Object Name) (Get-ChildItem $datafilesfolderB | Where-Object $filter | Select-Object Name) -Property Name