Oracle Business Intelligence (OBI) Best Practices

Let's get back to some useful information that we as developers need to take care of while doing our business with OBIEE as a tool. I have compiled some best practices that are true in my own view (they might not be from your end! In that case, put some comments).
There are definitely loads of other best practices we need to adhere to during development and configuration, and tons more will be added with each new release of OBIEE. I promise to keep this thread up to date whenever I get new information, as it is easier for me to update it here, with no additional overhead of managing the new information in documents. Keep watching this thread!!! Cheers :-)
Voila….
OBIEE BEST PRACTICES GUIDELINES
Repository ‐ Physical Layer
Connection Pool
1. Use an individual database object for every project and give it a meaningful name
2. Follow a proper naming convention for the database object and connection pool as per the project/business unit
3. Use an optimized number of connection pools, maximum connections for each pool, shared logon, etc.
4. Do not keep any connection pool that is unable to connect to its database, as this might lead to a BI Server crash because the server continuously pings the connection
5. It is advisable to have a dedicated database account for OBI that can read all the required schemas and never expires
6. No improper call interface should be set in the connection pool
7. Ensure the "Execute queries asynchronously" option is checked in the connection pool details
Others
1. Define proper keys in the physical layer
2. Do not import foreign keys from the database
3. Specify an intelligent cache persistence time based on each physical table's refresh period
4. Avoid using hints at the physical table level
5. Use aliases to resolve all circular joins
6. Avoid creating views (SQL-based objects) in the physical layer unless necessary
7. There should be no fact-table-to-fact-table joins
8. Use consistent naming conventions across aliases
9. Do not use a separate alias for degenerate facts
10. Search for and eliminate triangle joins
11. Always define your catalog under projects (typically useful for Multi User Development environments and repository merging)
Repository Design ‐ BMM
Hierarchy
1. Ensure each level of a hierarchy has an appropriate number of elements and a level key
2. No level key should be at the grand total level
3. A hierarchy should have only one grand total level
4. The lowest level of the hierarchy should be the same as the lowest grain of the dimension table; the lowest level of a dimension hierarchy must match the primary key of its corresponding dimension logical table
5. All the columns in a hierarchy should come from one logical table
6. All hierarchies should have a single root level and a single top level
7. If a hierarchy has multiple branches, all branches must have a common beginning point and a common end point
8. No column can be associated with more than one level
9. If a column pertains to more than one level, associate it with the highest level it belongs to
10. All levels except the grand total level must have level keys
11. There should be no unnecessary keys in a hierarchy
12. Optimize dimensional hierarchies so that they do not span multiple logical dimension tables
Aggregation
1. All aggregation should be performed from a fact logical table, not from a dimension logical table
2. All columns that cannot be aggregated should be expressed in a dimension logical table, not in a fact logical table
3. Non-aggregated columns can exist in a logical fact table if they are mapped to a source that is at the lowest level of aggregation for all other dimensions
4. Arrange dimensional sources in order of aggregation from lowest level to highest level
Others
1. Modelling should not be report-specific; it should be model-centric
2. Joins between logical facts and logical dimensions should be complex (intelligent), i.e. 0,1:N, not foreign key joins
3. It is advisable to have a logical star in the business model
4. Combine all like attributes into a single dimensional logical table; for example, never put product attributes in the customer dimension
5. Every logical dimension should have a hierarchy declared, even if it only consists of a grand total and a detail level
6. Never delete logical columns that map to keys of physical dimension tables
7. Explicitly declare the content of logical table sources
8. Follow a proper naming convention for logical tables and columns
9. Avoid giving a logical column the same name as its logical table or subject area
10. Configure the content/levels properly for all sources to ensure that OBI generates optimized SQL
11. Avoid applying complex logic in the "Where clause" filter
12. Level-based or derived calculations should not be stored in aggregated tables
13. Explicitly declare the content of logical table sources, especially for logical fact tables
14. Create a separate source for dimension extensions
15. Combine fact extensions into the main fact source
16. Separate mini-dimensions from large dimensions as different logical tables, keeping options open for deleting the large dimensions
17. If multiple calendars and time comparisons for both are required (e.g. fiscal and Gregorian), consider separating them into different logical dimension tables; this simplifies calendar table construction
18. Each logical table source of a fact logical table needs to have aggregation content defined. The aggregation content rule defines at what level of granularity the data is stored in this fact table. For each dimension that relates to this fact logical table, define the level of granularity, ensuring that every related dimension is defined (see the sketch after this list)
19. Delete unnecessary physical columns here
20. Rename logical columns here so that the change cascades to all referencing presentation columns
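As a sketch of item 18 under assumed names (the table and level names here are hypothetical): a fact source loaded at monthly grain would get its content set in the Administration Tool under Logical Table Source > Content, for example:

Logical table source: FACT_SALES_MONTH_AGG
Time dimension -> Month level
Product dimension -> Product Detail level
Customer dimension -> Customer Total level (the aggregate does not break out customers)

With this declared, the BI Server can route month-level queries to the aggregate source and day-level queries to the detail source.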
Logical Joins
1. If foreign key logical joins are used, the primary key of a logical fact table should be comprised of the foreign keys. If the physical table does not have a primary key, the logical key for a fact table needs to be made up of the key columns that join to the attribute tables
2. Look at the foreign keys of the fact table and find references to all of the dimension tables that join to this fact table
3. Create the logical key of the fact table as a combination of those foreign keys
Presentation Layer
1. Keep it simple: break out complex logical models into simple, discrete, and manageable subject areas
2. Expose the most important facts and dimensions
3. All columns should be named in business-relevant terms, NOT in physical table/column terms
4. Follow a proper naming convention for all tables and columns, with initial-cap labelling
5. Do not combine tables and columns from mutually incompatible logical fact and dimension tables
6. Ensure that aliases for presentation layer columns and tables are not used unless necessary, and verify that reports do not use the aliases
7. End users should not get any errors when querying any two random columns in a well-designed presentation layer
8. Each catalog should have a description, which will be visible from Answers
9. Each presentation column should have a description visible from Answers on mouse hover
10. Delete unnecessary BMM columns from the presentation layer
11. Avoid naming catalog folders the same as presentation tables
12. If the presentation catalog has a tree-like folder structure (main and sub folders), place a dummy measure in the main catalog folder
13. Avoid setting permissions on tables or groups unless necessary
14. If a presentation table is in a tree-like structure, place a dummy column named '_' to enforce proper table sorting; this also helps in merging activities
15. Separate numeric and non-numeric quantities into separate folder sets, and mark columns that have aggregation rules as "Facts" or "Measures"
16. As a general rule, detailed presentation catalogs should include measures from one fact table only, as the detailed dimensions (e.g. degenerate facts) are non-conforming with other fact tables
17. An overview catalog's dimensionality is the intersection of the conforming dimensions of the base facts
18. Do not use special characters ('$', '%', '&', '_', "'", etc.) in naming conventions in the Presentation Layer or on dashboards
Others
RPD Security
1. Use externalized security for user-group association to roll out a large number of users
2. Users should not be stored inside the repository
3. Use template groups (i.e. security filters with session variables) to minimize group proliferation (see the sketch after this list)
4. Limit online repository updates to minor changes
5. For major editing, take a backup copy of the repository and edit it in offline mode
6. Use a naming convention for initialization blocks and variables for ease of maintenance
7. Follow proper migration strategies
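A minimal sketch of item 3, with hypothetical table, column, and variable names: a session initialization block populates a session variable per user at logon, and a single group-level data filter references it, so one group definition serves any number of users.

Init block SQL (target variable USER_REGION):
SELECT region FROM user_region_map WHERE UPPER(user_id) = UPPER(':USER')

Data filter on the group (Manage > Security), applied to the logical table:
"Sales"."Dim Region"."Region" = VALUEOF(NQ_SESSION."USER_REGION")

':USER' and VALUEOF(NQ_SESSION.variable) are standard OBIEE syntax; the rest is an assumed example.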
Report Design
Shared Folders
1. Each project or business unit is given a dedicated shared folder in the catalog to create/save its report developments
2. Project-specific work is not supposed to be saved in "My Folders"
3. Dashboards, page names, report names, and web groups should be saved to the relevant business-area shared folders with proper, easily identifiable naming conventions
4. Each dashboard, page, and report should have a description
Interactive Dashboard
1. Keep dashboards compact, balanced, and feature-rich
2. Do not use special characters ('$', '%', '&', '_', "'", etc.) in naming conventions for dashboards/reports
3. Try to avoid complex pivot tables
4. It is not recommended to use Guided Navigation where it affects report performance
5. Use a single Go button for all the prompts in the report
6. Apply the hidden column format as the system-wide default for these preservation measures
7. Names should be meaningful for the business
8. Each report can have a title definition
9. Do role-based personalization wherever applicable
10. Answers access should be restricted to a group of users via privilege control by the BI Administrator
11. Apply filters with sensible default values to avoid high response times
12. Avoid drop-down lists for filters with a large set of distinct values
13. Place date calendar columns in a pivot rather than a tabular data view
14. Always try to put a single Go button
15. Make drills happen in place in the dashboard for drilled-down reports
16. Put the Download, Refresh, and Print links across all reports
17. 2-D charts are easier to read than their 3-D counterparts, typically for bar charts
18. Use a standard dashboard layout across all pages
19. Try to keep only 3 to 4 messages per page
20. Remember people read from left to right and top to bottom
21. Always keep dashboards symmetrical, balanced, and visually appealing
22. Augment basic reports with conditional formatting
23. Always leverage portal navigation, report navigation, and drill-down
24. Create visibility roles
25. Place filter views underneath title views
26. Don't show detailed filters (they look cluttered)
27. Use standard saved templates to import formatting and apply the layout
28. Use region/section collapsible features
29. Use a view selector to allow the same data to be replicated across several views
30. Avoid horizontal scrolling of dashboard pages
31. Remember that you can embed folders or groups of folders
32. Always drill/navigate from summary to detail, top to bottom
33. Always try to leverage each request view within an application or demonstration (table, chart, pivot table, ticker, narrative, filter, etc.)
Performance
1. There should be no global consistency errors or warnings
2. Reduce the metadata repository size to the extent possible by removing unused objects
3. Optimize the settings for the BI Server and web server configuration file parameters
4. Apply database hints for a table in the physical layer of the repository
5. Reduce the number of SELECT statements executed by the OBIEE Server
6. Limit the generation of lengthy physical SQL queries by modelling dimension hierarchies properly (drill key and level key)
7. Disable or remove unused initialization blocks and reduce the number of init blocks in the repository
8. Set the correct and recommended log level in production, set query limits, and turn off logging
9. Push complex functions down to the database side (see the sketch after this list)
10. Have a good caching and purging strategy (web server cache and Siebel Analytics Server cache)
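One way to implement item 9 is the EVALUATE function (used against Essbase later in this blog), which ships an expression straight through to the data source. A sketch with hypothetical column names; REGEXP_COUNT is an Oracle Database function, and EVALUATE substitutes %1, %2, ... with the trailing arguments:

EVALUATE('REGEXP_COUNT(%1, %2)' as INTEGER, Customers."Customer Name", 'a')

The BI Server does not parse the quoted expression; it inlines it into the physical SQL, so the work happens in the database instead of in the BI Server.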

IIS and SAW in two Different Server – OBIEE

So far, what we have observed is that in a typical environment both IIS and Siebel Analytics Web (SAW) are on the same machine, even if we consider that SAS could be on a different server.
We had a typical requirement to segregate IIS and SAW onto two different servers. It's not a big challenge, but believe me, the configuration to achieve this is not possible if you don't know how to tweak the Windows settings (and possibly others in a non-Windows environment).
So let's assume I have two hosts: aphbit05 and aphbit06. My IIS is at aphbit05 and my SAS and SAW are up and running at aphbit06.
I have set up a virtual directory on the aphbit05 IIS so that it points to …\\aphbit06\web\app (please see the virtual directory setup process in my previous threads).
Here it is assumed that the web folder has been shared on the network on host aphbit06, keeping in mind that the OS, network, and IIS permissions allow the access.
I have SiteMinder on both servers, so let's make sure that the SiteMinder agent is turned off on both by tuning the parameter as per the attachment.
[Screenshot: SiteMinder off]
Change the Windows registry settings on the aphbit05 server as below:
[HKEY_LOCAL_MACHINE\SOFTWARE\Siebel Systems, Inc.\Siebel Analytics\Web\7.8\ISAPI]
"ServerConnectString"="sawtcp://<IP of aphbit06>:9710"
@=""
(Though the host name works, I sometimes prefer to use the IP.) It looks like the changes below. Note that port 9710 is the default for IIS to communicate with SAW.
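For convenience, the same change expressed as an importable .reg file (a sketch; the <IP of aphbit06> placeholder is still yours to fill in):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Siebel Systems, Inc.\Siebel Analytics\Web\7.8\ISAPI]
"ServerConnectString"="sawtcp://<IP of aphbit06>:9710"

Save it with a .reg extension, double-click it on aphbit05, and restart IIS for the change to take effect.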
[Screenshot: Registry ISAPI change]

Be careful about the settings below in the instanceconfig.xml file at aphbit06 (if you have different skins across the two servers, and if you have SAW at host aphbit05 as well but disabled):
<DefaultStyle> </DefaultStyle>
<DefaultSkin> </DefaultSkin>
Use the tag below in the instanceconfig.xml file to specify the local directory resource you are trying to access (considering web/app/res is on the C drive):
<URL>
<ResourcePhysicalPath>c:\SiebelAnalytics\Web\App\Res</ResourcePhysicalPath>  
</URL>
Also ensure that the Windows user running the IIS and SAW server processes has full permissions on this directory. Now let's restart IIS at aphbit05 and SAS/SAW at aphbit06.
The blue moon Siebel Analytics login page should appear when you try to access the URL.
So all web server related requests will be served by aphbit05, while aphbit06 will be engaged in performing its analytical processing business.
N.B.: The above configuration and settings work on Siebel platform version 7.8.5 but have not been tested on the latest OBIEE 10.1.3.4 release. Whether the process is valid in a non-Windows OC4J environment has also not been tested.

Integrating BI Publisher with OBIEE Presentation Service

1) Log in to BI Publisher using 'Administrator'/'Administrator' and go to the 'Admin' tab.
2) Go to 'Security Configuration'.
3) Check the option 'Enable Local Superuser' and set the superuser name to 'admin' and the password to 'admin'. This keeps an option open so that you need not flip the SECURITY_MODEL parameter between 'XDO' and 'BI_SERVER' to switch between local BI Publisher and BI Server based security.
4) Now change the security model as per the below so that BI Publisher can talk with Presentation Services, i.e. the Oracle BI Server. After changing it, click Apply.
[Screenshot: Integrating Publisher 1]
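Since the screenshot carries the actual setting, here is the gist of what lands in xmlp-server-config.xml (a sketch based on the XDO variant shown in the deployment post below; the Admin UI writes this for you when you click Apply):

<property name="SECURITY_MODEL" value="BI_SERVER"/>

Together with the BI Server JDBC connection details entered on the same page, this is what lets Publisher authenticate against the BI Server.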
5) Restart the OC4J daemon.
6) Sign in to BI Publisher once again and you will find the connection below created under Admin -> JDBC -> Oracle BI EE.
[Screenshot: Integrating Publisher 2]
7) You can now see that the file 'D:\OracleBI\xmlp\XMLP\Admin\Configuration\xmlp-server-config.xml' has been modified with the configuration parameters.
[Screenshot: Integrating Publisher 3]
8) Now try to log in using the BI Server Administrator password. In my case it is 'newpass9', and you will get the screen below. Note that the 'Admin' tab is missing here; we will make it visible after some more configuration settings.
[Screenshot: Integrating Publisher 4]
9) You get the same screen when you come in through the 'BI Publisher' option of the Presentation Services administration. So it is now integrated, but no subject area or shared folder has been created and nothing has been set up yet. Let's do that in the next step.
[Screenshot: Integrating Publisher 5]
[Screenshot: Integrating Publisher 6]
10) Create a user group called 'XMLP_ADMIN' in the repository and assign it 'Administrator' permission.
11) Now sign in to BI Publisher using the BI Server login credentials, e.g. 'Administrator'/newpass9. Here you go: the Admin tab is visible, with the folders as well.
[Screenshot: Integrating Publisher 7]
12) Click on any of the shared folders and you get the option to create a new report. Here I am creating a new report based on the existing subject area 'testdeba'.
[Screenshot: Integrating Publisher 8]
13) Edit 'testdeba' and create a new data model with the default data source set to 'Oracle BI EE'.
[Screenshot: Integrating Publisher 9]
14) Click on 'Query Builder', select the schema (i.e. the subject area), and select a region.
15) Click Save, then View, and then Analyzer, which will show the report below after dragging and dropping the left-pane columns onto their respective regions.
[Screenshot: Integrating Publisher 10]
16) So now OBIEE Publisher is ready to work; the details of how Publisher works are out of scope for this post. For more details, refer to the BI Publisher Admin and User guides from Oracle.
N.B.: Some of these configuration tasks are already done by Oracle from the 10.1.3.4.0 release onwards, so there is no more manual interaction with the config parameters, which is indeed good for newbies and pretty cool for the experienced. Yet I think this will be helpful, especially to understand how the interaction is managed behind the scenes.

BI Publisher Module Deploy and Undeploy – OBIEE 10.1.3.3

Deploying the BI Publisher Module
Below is my OC4J BI home location; execute the following as per your environment.
From a command window: cd D:\OracleBI\oc4j_bi\j2ee\home
Copy the 'Oracle_Business_Intelligence_Publisher' folder from '..\OBI 10.1.3.3.2 Installer dir\Server_Ancillary' to the D drive.
Execute the below:
java -jar admin_client.jar deployer:oc4j:sryndent9g oc4jadmin oc4jadmin -deploy -file D:\Oracle_Business_Intelligence_Publisher\oc4j\xmlpserver.ear -deploymentName xmlpserver -bindAllWebApps
N.B.: sryndent9g is the hostname and the second 'oc4jadmin' is my password for the oc4jadmin Enterprise Manager console (http://sryndent9g:9704/em/).
[Screenshot: Publisher 1]
Undeploying the BI Publisher Module
cd D:\OracleBI\oc4j_bi\j2ee\home
java -jar admin_client.jar deployer:oc4j:sryndent9g oc4jadmin oc4jadmin -undeploy xmlpserver
XMLP Configuration Parameter Changes
Until and unless you change the default installation parameters, you will not be able to log in to BI Publisher.
Let's change the required parameters:
1) Edit 'xmlp-server-config.xml' under the path 'D:\OracleBI\oc4j_bi\j2ee\home\applications\xmlpserver\xmlpserver\WEB-INF'. Change the file path as below: <file path="D:\OracleBI\xmlp\XMLP"/>
2) Edit 'xmlp-server-config.xml' under the path 'D:\OracleBI\xmlp\XMLP\Admin\Configuration'. Change it to <property name="SECURITY_MODEL" value="XDO"/>. This basically changes the security model from BI Server security to BI Publisher security. This security model uses the role mapping defined in the file D:\OracleBI\xmlp\XMLP\Admin\Security\principals.xml; with the default installer, the encrypted password always represents 'Administrator'.
3) Let's restart the OC4J service and log into the system using the default BI Publisher password 'Administrator'.
[Screenshot: Publisher 2]
And you will get the below:
[Screenshot: Publisher 3]
Okay, now the BI Publisher setup has been done for Publisher-based security. Let's sign out.
Now log in to BI Presentation Services using the <host>:9704/analytics/saw.dll? URL and click on BI Publisher under More Products.
This throws the error below:
Oracle BI Publisher Enterprise Reporting Login: Login failed: Please contact administrator for your username/password. Error Details
This is for an obvious reason: local BI Publisher authentication is not based on the BI Server security model but on the default BI Publisher XDO security model, while Presentation Services operates only in BI Server authentication mode. So the authentication compatibility check fails.
Let's make some more config changes to make it work in integrated mode, which follows in my next post.
N.B.: From OBIEE 10.1.3.4 onwards there are no more hassles with deploying the Publisher module, as the full installation takes all this configuration into consideration unless you remove the Publisher module during a customized installation.
Still, the above material helps you understand how the module actually gets installed, and it is pretty handy for further investigation.

"in between" filters for MDX sources

Fiddling around with some more functional options we've all come to know, like, and use frequently, I found that "in between" filters in Answers requests going against MDX sources (Essbase, MS Analysis Services, etc.) don't work as expected.

To showcase this, I have created a simple request. Planning and Current numbers by Fiscal Month within Europe. And I'm interested in all months between Q1 2009 and Q4 2009:

Checking the results tab we see that it's not necessarily what we'd expect:

Doing a full data scroll reveals that basically all Fiscal Month members are being pulled up.

Here's the MDX from the log:

With 
set [Geo3] as 'Filter([Geo].Generations(3).members, (([Geo].CurrentMember.MEMBER_ALIAS = "Europe" OR [Geo].CurrentMember.MEMBER_Name = "Europe")))'
set [Time Period3] as 'Filter([Time Period].Generations(3).members, ( NOT (([Time Period].CurrentMember.MEMBER_ALIAS < "Q1-09" OR [Time Period].CurrentMember.MEMBER_Name < "Q1-09"))) AND ( NOT (("Q4-09" < [Time Period].CurrentMember.MEMBER_ALIAS OR "Q4-09" < [Time Period].CurrentMember.MEMBER_Name))))'
set [Time Period4] as 'Generate({[Time Period3]}, Descendants([Time Period].currentmember, [Time Period].Generations(4),SELF), ALL)'
member [Scenario].[MS1] as 'AGGREGATE(crossjoin {[Geo3]},Scenario.[Current])', SOLVE_ORDER = 100
member [Scenario].[MS2] as 'AGGREGATE(crossjoin {[Geo3]},Scenario.[Planned])', SOLVE_ORDER = 100
select 
{ [Scenario].[MS1],
[Scenario].[MS2],
[Scenario].[MS3]
} on columns,
NON EMPTY {{[Time Period4]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows 
from [Sample.Sample]


And here's the important bit:

set [Time Period3] as 'Filter([Time Period].Generations(3).members, ( NOT (([Time Period].CurrentMember.MEMBER_ALIAS < "Q1-09" OR [Time Period].CurrentMember.MEMBER_Name < "Q1-09"))) AND ( NOT (("Q4-09" < [Time Period].CurrentMember.MEMBER_ALIAS OR "Q4-09" < [Time Period].CurrentMember.MEMBER_Name))))'

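Until the SR comes back, a manual workaround sketch (member names taken from the example above, assuming the quarters are contiguous siblings in the outline) is to replace the lexicographic string comparison with an explicit member range, which Essbase resolves by outline order rather than by comparing alias text:

set [Time Period3] as '{[Q1-09]:[Q4-09]}'

The subsequent Generate(...) over [Time Period3] can stay as it is; the range would have to be injected via EVALUATE, though, since Answers keeps producing the Filter(...) shown above on its own.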
I've opened an SR for this since there's no mention of it on Metalink, in the release notes, or in the new features guides.

Venkat's magic box on multiple-hierarchy reporting

Leave it to Doctor Venkat to sum this one up so nicely:

http://www.rittmanmead.com/2009/07/13/oracle-bi-ee-10-1-3-4-1-multi-hierarchy-reporting-shared-parents-and-shared-childs-fragmentation/

Special characters in EVALUATE-wrapped MDX

Something I ran into recently is a little bug in the way OBIEE 10.1.3.4.0 produces MDX out of an EVALUATE wrapper when the MDX contains special characters (yes, it is a bug).

Let's start with a basic and pure OBIEE report. We want to see the average coverage % by the 3rd generation of our product hierarchy:

Now let's switch out the presentation column for an EVALUATE function with MDX as what we really want is the average coverage % for the scenario "Actual":

EVALUATE('(%1.dimension.currentmember, [Scenario].[Actual],[Account].[Coverage % Avg]).value' as INTEGER, Product."Gen3,Product")

Let's run it.

"Unknown Member Coverage % Avg"? Liar. I see it directly in front of me. Off to checking the log:

With 
set [Product3] as 'Generate([Product].Generations(2).members, Descendants([Product].currentmember, [Product].Generations(3), leaves))'
member [Account].[MS1] as '([Product].Generations(3).dimension.currentmember,[Scenario].[Actual],[Account].[Coverage %  Avg]).value'
member [Account].[MS2] as 'RANK([Product].Generations(3).dimension.currentmember,[Product].Generations(3).members)'

select 
{MS1,
MS2} on columns,
{{[Product3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows 
from [MyCube.Ess931_ASO]

-------------------- Query Status: Query Failed: Essbase Error: Unknown Member Coverage %  Avg used in query

Do you see the difference? While the GUI states "Coverage % Avg", the log and the MDX which is actually fired reference "Coverage %  Avg". TWO spaces!
So let's work around this by changing the EVALUATE to include the "StrToMbr" function:

EVALUATE('(%1.dimension.currentmember, [Scenario].[Actual],StrToMbr("[Account].[Coverage % Avg]")).value' as INTEGER, Product."Gen3,Product")

When running this, we get the correct results again:

So if you want to reference members containing special characters, use StrToMbr, since otherwise you will get non-functioning MDX from the BI Server.

End of line

Dynamic multi-level filtering with MDX

Following up on my Essbase ranking post and a question from OTN, I’d like to elaborate on the ranking and filtering of the rank results.

Again, we start with a basic report showing the top10 products by actual sales. We’re not using OBIEE functions due to the performance constraints mentioned in the first article.

The core MDX function retrieving the dimension members being:
EVALUATE('TOPCOUNT(%1.members,10,[Actual])',Product."Gen6,Product")

Business now wants to implement filters on Gen2 through Gen5 of the product hierarchy in order to be able to choose their top10 products more specifically. This normally means four "is prompted" filters and a dashboard prompt containing Gen2 through Gen5 as hierarchically constrained prompt columns. Let's keep it simple and try one of the generations as a fixed filter before creating the prompts and testing in a dashboard context.

As already stated, the grain of the query changes to that of the filter and renders the whole request useless.

MDX without filter:
With
set [Account3] as 'Filter([Account].Generations(3).members, (([Account].CurrentMember.MEMBER_ALIAS = "Net Sales" OR [Account].CurrentMember.MEMBER_Name = "Net Sales")))'
set [Market3] as 'Filter([Market].Generations(3).members, (([Market].CurrentMember.MEMBER_ALIAS = "Europe" OR [Market].CurrentMember.MEMBER_Name = "Europe")))'
set [Time Period3] as 'Filter([Time Period].Generations(3).members, (([Time Period].CurrentMember.MEMBER_ALIAS = "Q4-09" OR [Time Period].CurrentMember.MEMBER_Name = "Q4-09")))'
set [Evaluate0] as '{TOPCOUNT([Product].Generations(6).members,10,[Actual]) }'
member [Scenario].[MS1] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual LY])', SOLVE_ORDER = 100
member [Scenario].[MS2] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[MS1],
[Scenario].[MS2]
} on columns,
NON EMPTY {{[Evaluate0]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [MyCube_A.MyCube]

MDX with filter:
With
set [Account3] as 'Filter([Account].Generations(3).members, (([Account].CurrentMember.MEMBER_ALIAS = "Net Sales" OR [Account].CurrentMember.MEMBER_Name = "Net Sales")))'
set [Market3] as 'Filter([Market].Generations(3).members, (([Market].CurrentMember.MEMBER_ALIAS = "Europe" OR [Market].CurrentMember.MEMBER_Name = "Europe")))'
set [Product4] as 'Filter([Product].Generations(4).members, (([Product].CurrentMember.MEMBER_ALIAS = "SMU PF - Business Products" OR [Product].CurrentMember.MEMBER_Name = "SMU PF - Business Products")))'
set [Time Period3] as 'Filter([Time Period].Generations(3).members, (([Time Period].CurrentMember.MEMBER_ALIAS = "Q4-09" OR [Time Period].CurrentMember.MEMBER_Name = "Q4-09")))'
set [Evaluate0] as '{TOPCOUNT([Product].Generations(6).members,10,[Actual]) }'
member [Scenario].[MS1] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},crossjoin ({[Product4]},{[Time Period3]}))),Scenario.[Actual LY])', SOLVE_ORDER = 100
member [Scenario].[MS2] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},crossjoin ({[Product4]},{[Time Period3]}))),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[MS1],
[Scenario].[MS2]
} on columns,
NON EMPTY {{[Evaluate0]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [MyCube_A.MyCube]

Let's forget about OBIEE filtering. This means that we have to do the filtering within the MDX and make that filter respond to the prompts. For this we will use presentation variables and a new MDX statement:

EVALUATE('TOPCOUNT(Intersect(%1.dimension.members,Descendants([@{vPromptedProdHier}{SMU}],[Product].[Gen6,Product])),10,[Actual])',Product."Gen6,Product")

And here's the prompt to go with it:

The multiple usage of the same presentation variable ensures that the variable always holds the lowest level currently chosen in the hierarchy, which nicely adapts our MDX statement to constrain the results to the descendants of the chosen dimension member (a sketch of the prompt setup follows, since the screenshot is gone).
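A sketch of the prompt setup implied by the text, with the names used above: a dashboard prompt with four hierarchically constrained columns, each writing into the same presentation variable:

Product."Gen2,Product" -> sets vPromptedProdHier
Product."Gen3,Product" -> sets vPromptedProdHier
Product."Gen4,Product" -> sets vPromptedProdHier
Product."Gen5,Product" -> sets vPromptedProdHier

Whichever generation the user picks last overwrites the variable, and the @{vPromptedProdHier}{SMU} reference in the EVALUATE falls back to the default 'SMU' when nothing is chosen.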

Family:

With
set [Account3] as 'Filter([Account].Generations(3).members, (([Account].CurrentMember.MEMBER_ALIAS = "Net Sales" OR [Account].CurrentMember.MEMBER_Name = "Net Sales")))'
set [Market3] as 'Filter([Market].Generations(3).members, (([Market].CurrentMember.MEMBER_ALIAS = "Europe" OR [Market].CurrentMember.MEMBER_Name = "Europe")))'
set [Time Period3] as 'Filter([Time Period].Generations(3).members, (([Time Period].CurrentMember.MEMBER_ALIAS = "Q4-09" OR [Time Period].CurrentMember.MEMBER_Name = "Q4-09")))'
set [Evaluate0] as '{TOPCOUNT(Intersect([Product].Generations(6).dimension.members,Descendants([SMU PF - Business Products],[Product].[Gen6,Product])),10,[Actual]) }'
member [Scenario].[MS1] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual LY])', SOLVE_ORDER = 100
member [Scenario].[MS2] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[MS1],
[Scenario].[MS2]
} on columns,
NON EMPTY {{[Evaluate0]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [MyCube_A.MyCube]

Group:

With
set [Account3] as 'Filter([Account].Generations(3).members, (([Account].CurrentMember.MEMBER_ALIAS = "Net Sales" OR [Account].CurrentMember.MEMBER_Name = "Net Sales")))'
set [Market3] as 'Filter([Market].Generations(3).members, (([Market].CurrentMember.MEMBER_ALIAS = "Europe" OR [Market].CurrentMember.MEMBER_Name = "Europe")))'
set [Time Period3] as 'Filter([Time Period].Generations(3).members, (([Time Period].CurrentMember.MEMBER_ALIAS = "Q4-09" OR [Time Period].CurrentMember.MEMBER_Name = "Q4-09")))'
set [Evaluate0] as '{TOPCOUNT(Intersect([Product].Generations(6).dimension.members,Descendants([SMU PG - BP Video],[Product].[Gen6,Product])),10,[Actual]) }'
member [Scenario].[MS1] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual LY])', SOLVE_ORDER = 100
member [Scenario].[MS2] as 'AGGREGATE(crossjoin ({[Account3]},crossjoin ({[Market3]},{[Time Period3]})),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[MS1],
[Scenario].[MS2]
} on columns,
NON EMPTY {{[Evaluate0]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [MyCube_A.MyCube]

End of line.

Showing numbers as string while retaining the thousand separator

Something that came up on OTN:
Cast a metric (number) as char but still keep the thousand separator. E.g.: if "Sales" has a value, show it formatted as 1'000, else show "Nothing to report"?

Here's a formula you can use. It casts the number to CHAR and stitches it back together from the right in three-digit groups, appending an apostrophe between groups; note that '''' inside a logical SQL string literal yields a single ' character:

case when (LENGTH(cast(Fact.Sales as char)) > 12) then
(SUBSTRING(cast(Fact.Sales as char) FROM LENGTH(cast(Fact.Sales as char)) - 15 FOR 3) || '''')
else '' END ||
case when (LENGTH(cast(Fact.Sales as char)) > 9) then
SUBSTRING(cast(Fact.Sales as char) FROM LENGTH(cast(Fact.Sales as char)) - 12 FOR 3) || ''''
else '' END ||
case when (LENGTH(cast(Fact.Sales as char)) > 6) then
SUBSTRING(cast(Fact.Sales as char) FROM LENGTH(cast(Fact.Sales as char)) - 9 FOR 3) || ''''
else '' END ||
case when (LENGTH(cast(Fact.Sales as char)) > 0) then
SUBSTRING(cast(Fact.Sales as char) FROM LENGTH(cast(Fact.Sales as char)) - 6 FOR 7)
else 'Nothing to report' END

End of line.