
Adding Default Rows with OBIEE

When looking at a typical dimensional model, there are cases when the application of one or more dimension tables may not make sense for particular fact rows. We handle these situations with special rows in the dimension table that specify this lack of relationship, sometimes known generically as “default” rows. The three basic cases of default rows are: the “placeholder” row, the “not applicable” row, and the “unknown” row.
The placeholder row is typically seen in cases with late-arriving dimension records. Perhaps a latency issue exists between two source systems, and we aren’t yet aware which dimension row we should apply. So we make a temporary association with a placeholder row until the appropriate dimension row is loaded. Or perhaps we have data quality issues in the source system, and we’ll never know which dimension record should be applied, but we want to load the fact rows anyway. This is usually referred to as the “unknown” dimension row. Finally, a particular dimension may not have a logical application for this row. For instance, an order may not have a particular promotion associated with it. This is typically referred to as the “not applicable” row.
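The three cases above might look something like this in a promotion-style dimension. Note that the table name, column names, and surrogate key values here are illustrative conventions only, not mandated by the source schemas discussed below:

```sql
-- Illustrative default rows for a promotion dimension. Negative surrogate
-- keys are a common convention for default rows, but any reserved values work.
INSERT INTO promotions_dim (promo_key, promo_name)
VALUES (-1, 'Unknown Promotion');        -- source data quality issue; never resolvable
INSERT INTO promotions_dim (promo_key, promo_name)
VALUES (-2, 'Promotion Not Yet Loaded'); -- placeholder for a late-arriving dimension row
INSERT INTO promotions_dim (promo_key, promo_name)
VALUES (-3, 'No Promotion');             -- "not applicable": no logical relationship exists
```

Fact rows then carry one of these reserved keys instead of a NULL, so joins and filters behave the same for default rows as for regular ones.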
OLTP systems may also make use of default rows to signify the lack of a relationship, but more typically they use a NULL value in the ID field of the lookup table. These differing approaches can be seen in Oracle’s own sample schemas. In the SH schema, which is modeled after a data warehouse, the rows in the SALES fact table that don’t have a particular promotion in effect at the time of the sale link to the PROMOTIONS dimension table via the surrogate key 999:
SQL> select promo_name,
  2  promo_category
  3  from sh.promotions
  4  where promo_id=999;

PROMO_NAME                     | PROMO_CATEGORY
------------------------------ | ------------------------------
NO PROMOTION #                 | NO PROMOTION

1 row selected.

Elapsed: 00:00:00.01
SQL> 
But when looking at the OE schema, which is modeled after a transactional OLTP system, the lack of a promotion is represented with a NULL value in the PROMOTION_ID column:
SQL> select count(*)
  2  from oe.orders
  3  where promotion_id is null;

  COUNT(*)
----------
       105

1 row selected.

Elapsed: 00:00:00.00
SQL> 
Though not as straightforward as a real star schema, it is possible to mimic the functioning of these default row scenarios in a logical star schema when using OBIEE to report off of an OLTP system. To demonstrate, I’ll use the OE schema, creating a fact table out of the ORDERS table and a dimension table out of the PROMOTIONS table. The first thing I have to do is configure an outer join between the dimension and the fact, which I do in the Business Model:
right join.png
Now, in Answers, we still get results back from the Orders Facts table for those orders that don’t have an applicable promotion (which, for some reason, is all of them in the OE schema), but I wouldn’t say it’s exactly what we are looking for:
first answer result.png
Technically it works, but the NULL values don’t make for a great-looking report. Also, as soon as a filter is applied on the dimension table, the outer join effectively disappears: all of the dimension attributes for these rows have NULL values, and any attribute with a NULL value will fall out of the join as soon as it is filtered on. What would be better is if the default rows showed up just like other rows in the dimension table, with values explaining the lack of association, such as “No Promotion” in the case at hand. This would also allow filtering on these columns, since they would no longer contain NULL values. So I place IfNull functions on all the attributes in the dimension table, transforming them to meaningful values:
ifnull.png
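In logical column terms, the expressions look something like the following. The literal ‘No Promotion’ is just the label chosen for this example, and the same pattern applies to any other attribute in the dimension:

```sql
-- Replace the NULLs produced by the outer join with a meaningful label.
IFNULL("Promotions"."PROMO_NAME", 'No Promotion')
```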
Now, I can apply filters to any of the attributes in the Promotions dimension, and it works identically to a default dimension:
answer result with filter.png
We’re almost there. The only remaining issue is the filter dialogue in Answers. Answers does not yet know about the value “No Promotion”, because that transformation does not occur until the join is applied, and the filter dialogue simply queries the Promotions table in isolation. So when checking all the choices we can apply in the filter dialogue, we don’t see our choice for the default row:
filter.png
To resolve this, I need to have the default value as an actual value in the logical dimension table. One way to do this is with a SELECT table that includes a UNION with the default values already existing in the dimension table. To construct this scenario, the physical table would look like this:
sel_promotions.png
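A sketch of what that physical SELECT table might contain, assuming the OE PROMOTIONS columns and an illustrative default key of -1 (the exact key value and column list depend on your model):

```sql
-- Real promotion rows plus a synthetic 'No Promotion' default row.
SELECT promo_id, promo_name
FROM   oe.promotions
UNION ALL
SELECT -1, 'No Promotion'
FROM   dual
```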
Now, instead of doing IfNull functions on all the attributes in the Promotions dimension table, I just modify the join in the physical layer from a foreign key join to a complex join, specifying the IfNull logic:
complex join.png
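The complex join expression would then map NULL promotion IDs on the fact side to the synthetic default key before joining. As a sketch, again assuming -1 as the illustrative default key:

```sql
-- Complex join condition: NULL promotion IDs resolve to the default row.
SEL_PROMOTIONS.PROMO_ID = IFNULL(ORDERS.PROMOTION_ID, -1)
```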
And now, when I click “All Choices” in the filter dialogue, I see ‘No Promotion’ as a choice, and the join causes the expected results in the Answers report as well:
new filter.png
new answer result with filter.png

Oracle BI EE 10.1.3.4.1 – Currency Conversions & FX Translations – Part 1

One of the common requirements when implementing BI EE is the ability to handle multiple input currencies, especially in business scenarios where multiple countries and currencies are involved. In such cases, many finance-related measures, like Sales, come in local currencies, so as part of the BI EE setup we need to ensure that local currency transactions are converted to a common reporting currency. There are two types of currency conversions:
1. Local Currency to Reporting Currency Conversion – This is the most common requirement, where individual transactions are converted into a common reporting currency and then rolled up for reporting.
2. Reporting Currency Restatements – This is generally a finance requirement, where the common input reporting currency (assuming the input data itself comes in the reporting currency) has to be analyzed at varying rates. I will cover this in the next blog post.
I will be covering the first requirement in this blog post, i.e. converting local currency to reporting currency. I shall be using a modified form of the Oracle Sample SH schema. The high-level physical schematic diagram is given below:
Picture 5
Basically, every transaction in the SALES fact table is a transaction that was done in an individual country. For example, this fact table will have an AMOUNT_SOLD of, say, 100 EUR if the customer of the transaction is from Belgium (i.e. the product was bought in Belgium). The same fact table will also have an AMOUNT_SOLD of, say, 150 USD if the product was bought by a customer in the United States.
Every country will have a single local currency (USD, EUR, GBP etc). So the Countries table above will have CURRENCY_CODE as an attribute of a country.
Picture 6
Finally, we have a rates table called CURRENCY_RATES, which will basically store the daily fluctuating rates. For the purposes of this blog post, I will assume that there is only one common reporting currency, which is USD.
Picture 7
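The assumed shape of the CURRENCY_RATES table is sketched below. The column names are assumptions based on the generated SQL later in this post (RATE_DATE and FROM_CURRENCY appear in the join conditions there):

```sql
-- One row per day per source currency, giving the rate into the
-- reporting currency (always USD in this example).
CREATE TABLE currency_rates (
  rate_date     DATE        NOT NULL,
  from_currency VARCHAR2(3) NOT NULL, -- e.g. 'EUR', 'GBP'
  to_currency   VARCHAR2(3) NOT NULL, -- 'USD' here
  rate          NUMBER      NOT NULL, -- multiply the local amount by this
  PRIMARY KEY (rate_date, from_currency, to_currency)
);
```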
There are two ways to do currency conversion:
1. Do the rate multiplication only at the grain of the rates (the Time & Customer dimensions) and not for every transaction.
2. Do the rate multiplication for each and every transaction.
I will discuss both techniques here. I generally prefer the first one, as in many cases it turns out to be much faster than the other.
Rate Multiplication at Grain of Rates:
Consider the following query
SELECT
D.CURRENCY_CODE,
D.COUNTRY_NAME,
A.TIME_ID,
A.AMOUNT_SOLD
FROM
SALES A,
TIMES B,
CUSTOMERS C,
COUNTRIES D
WHERE
A.TIME_ID = B.TIME_ID AND
A.CUST_ID = C.CUST_ID AND
C.COUNTRY_ID = D.COUNTRY_ID
ORDER BY 2, 3
The above query produces the following data.
Picture 8
In our Rates table, we have one rate for every Day/Currency combination:
Picture 9
As you see above, we can arrive at the FX conversion in two ways. We can multiply each row in the SALES fact table by the rate and then do the roll-ups. For example:
Picture 11
Or we can roll up the transactions to the Day/Currency level and then multiply by the rates. In plain math, all we are trying to do is
a*d+b*d+c*d = (a+b+c)*d
Picture 12
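The two approaches can be sketched directly in SQL against the SH-style tables used above (assuming the CURRENCY_RATES shape described earlier, with exactly one rate per day/currency):

```sql
-- Approach 1: multiply each transaction by its rate, then aggregate.
SELECT SUM(s.amount_sold * r.rate) AS amount_usd
FROM   sales s
JOIN   customers c      ON c.cust_id = s.cust_id
JOIN   countries d      ON d.country_id = c.country_id
JOIN   currency_rates r ON r.rate_date = s.time_id
                       AND r.from_currency = d.currency_code;

-- Approach 2: aggregate to the rate grain (day/currency) first, then
-- apply one multiplication per day/currency combination. Same result,
-- but far fewer multiplications.
SELECT SUM(t.day_total * r.rate) AS amount_usd
FROM  (SELECT s.time_id, d.currency_code,
              SUM(s.amount_sold) AS day_total
       FROM   sales s
       JOIN   customers c ON c.cust_id = s.cust_id
       JOIN   countries d ON d.country_id = c.country_id
       GROUP BY s.time_id, d.currency_code) t
JOIN   currency_rates r ON r.rate_date = t.time_id
                       AND r.from_currency = t.currency_code;
```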
We start off with the latter, i.e. doing the multiplication once the roll-ups are done (but at the grain of the rates). To implement this in the repository, we need to model the rates as a separate fact table. The RATE measure will take Average as its aggregation (or any aggregation except None; since we always enforce the lowest-level multiplication, it does not matter which). Since RATEs do not conform to other dimensions like Products, Promotions & Channels, we need to assign the measure to the Total level of each of the non-conforming dimensions. Also, we need to assign the RATE to the Day level of the Time dimension and the Country level of the Customer dimension. This will ensure that we get a unique and correct rate. To test whether the rates work correctly, let’s create a very simple report as shown below:
Picture 15
As you see, we are able to produce the RATE values even for the non-conforming Channel dimension. Also, for all the cities within a country we get the same RATE. This is what assigning levels does to the RATE measure. Now create a Logical Column that multiplies the RATE by the Sales measure:
Picture 16
Since we are enforcing the levels in the RATE measure, the same enforcement will happen for the resulting measure as well. Let’s create a report as shown below:
Picture 19
If we look at the SQL query, we will notice that the join of the Rates to the Sales measure is enforced only at the RATE grain (through an outer query block), rather than for every transaction, which is not necessary:
WITH
SAWITH0 AS (select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c6 as c6,
     D1.c7 as c7,
     D1.c8 as c8,
     D1.c9 as c9
from
     (select sum(T12697.AMOUNT_SOLD) as c1,
               T12623.CHANNEL_CLASS as c2,
               T12638.CUST_CITY as c3,
               T12623.CHANNEL_CLASS_ID as c4,
               T12710.FISCAL_YEAR as c5,
               T13704.COUNTRY_NAME as c6,
               T12710.TIME_ID as c7,
               T12710.FISCAL_YEAR_ID as c8,
               T13704.COUNTRY_ID as c9,
               ROW_NUMBER() OVER (PARTITION BY T12623.CHANNEL_CLASS_ID, T12638.CUST_CITY, T12710.TIME_ID, T13704.COUNTRY_ID ORDER BY T12623.CHANNEL_CLASS_ID ASC, T12638.CUST_CITY ASC, T12710.TIME_ID ASC, T13704.COUNTRY_ID ASC) as c10
          from
               TIMES T12710,
               COUNTRIES T13704,
               CUSTOMERS T12638,
               CHANNELS T12623,
               SALES T12697
          where  ( T12638.COUNTRY_ID = T13704.COUNTRY_ID and T12623.CHANNEL_ID = T12697.CHANNEL_ID and T12638.CUST_ID = T12697.CUST_ID and T12697.TIME_ID = T12710.TIME_ID )
          group by T12623.CHANNEL_CLASS, T12623.CHANNEL_CLASS_ID, T12638.CUST_CITY, T12710.FISCAL_YEAR, T12710.FISCAL_YEAR_ID, T12710.TIME_ID, T13704.COUNTRY_ID, T13704.COUNTRY_NAME
     ) D1
where  ( D1.c10 = 1 ) ),
SAWITH1 AS (select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c6 as c6
from
     (select avg(T17009.RATE) as c1,
               T12710.FISCAL_YEAR as c2,
               T13704.COUNTRY_NAME as c3,
               T12710.TIME_ID as c4,
               T12710.FISCAL_YEAR_ID as c5,
               T13704.COUNTRY_ID as c6,
               ROW_NUMBER() OVER (PARTITION BY T12710.TIME_ID, T13704.COUNTRY_ID ORDER BY T12710.TIME_ID ASC, T13704.COUNTRY_ID ASC) as c7
          from
               TIMES T12710,
               COUNTRIES T13704,
               CURRENCY_RATES T17009
          where  ( T12710.TIME_ID = T17009.RATE_DATE and T13704.CURRENCY_CODE = T17009.FROM_CURRENCY )
          group by T12710.FISCAL_YEAR, T12710.FISCAL_YEAR_ID, T12710.TIME_ID, T13704.COUNTRY_ID, T13704.COUNTRY_NAME
     ) D1
where  ( D1.c7 = 1 ) ),
SAWITH2 AS (select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c6 as c6,
     D1.c7 as c7,
     D1.c8 as c8,
     D1.c9 as c9,
     D1.c10 as c10
from
     (select SAWITH0.c2 as c1,
               case  when SAWITH0.c5 is not null then SAWITH0.c5 when SAWITH1.c2 is not null then SAWITH1.c2 end  as c2,
               case  when SAWITH0.c6 is not null then SAWITH0.c6 when SAWITH1.c3 is not null then SAWITH1.c3 end  as c3,
               SAWITH0.c3 as c4,
               SAWITH0.c1 as c5,
               SAWITH0.c1 * SAWITH1.c1 as c6,
               case  when SAWITH1.c4 is not null then SAWITH1.c4 when SAWITH0.c7 is not null then SAWITH0.c7 end  as c7,
               SAWITH0.c4 as c8,
               case  when SAWITH0.c8 is not null then SAWITH0.c8 when SAWITH1.c5 is not null then SAWITH1.c5 end  as c9,
               case  when SAWITH1.c6 is not null then SAWITH1.c6 when SAWITH0.c9 is not null then SAWITH0.c9 end  as c10,
               ROW_NUMBER() OVER (PARTITION BY SAWITH0.c2, SAWITH0.c3, SAWITH0.c4, case  when SAWITH0.c5 is not null then SAWITH0.c5 when SAWITH1.c2 is not null then SAWITH1.c2 end , case  when SAWITH0.c6 is not null then SAWITH0.c6 when SAWITH1.c3 is not null then SAWITH1.c3 end , case  when SAWITH0.c8 is not null then SAWITH0.c8 when SAWITH1.c5 is not null then SAWITH1.c5 end , case  when SAWITH1.c4 is not null then SAWITH1.c4 when SAWITH0.c7 is not null then SAWITH0.c7 end , case  when SAWITH1.c6 is not null then SAWITH1.c6 when SAWITH0.c9 is not null then SAWITH0.c9 end  ORDER BY SAWITH0.c2 ASC, SAWITH0.c3 ASC, SAWITH0.c4 ASC, case  when SAWITH0.c5 is not null then SAWITH0.c5 when SAWITH1.c2 is not null then SAWITH1.c2 end  ASC, case  when SAWITH0.c6 is not null then SAWITH0.c6 when SAWITH1.c3 is not null then SAWITH1.c3 end  ASC, case  when SAWITH0.c8 is not null then SAWITH0.c8 when SAWITH1.c5 is not null then SAWITH1.c5 end  ASC, case  when SAWITH1.c4 is not null then SAWITH1.c4 when SAWITH0.c7 is not null then SAWITH0.c7 end  ASC, case  when SAWITH1.c6 is not null then SAWITH1.c6 when SAWITH0.c9 is not null then SAWITH0.c9 end  ASC) as c11
          from
               SAWITH0 full outer join SAWITH1 On SAWITH0.c7 = SAWITH1.c4 and SAWITH0.c9 = SAWITH1.c6
     ) D1
where  ( D1.c11 = 1 ) )
select SAWITH2.c1 as c1,
     SAWITH2.c2 as c2,
     SAWITH2.c3 as c3,
     SAWITH2.c4 as c4,
     SAWITH2.c5 as c5,
     SAWITH2.c6 as c6
from
     SAWITH2
order by c1, c2, c3, c4
Though the query above might look big, it performs really well, as the multiplication happens only for a select set of records. But there is one downside to this approach. If we do not have the Time or Customer dimension in the report, the currency-converted measure will still be computed at the grain of Day and Country, as shown below:
Picture 20
The only way to roll these up in such cases is to use a Pivot Table (or custom Logical SQL), as there is currently no capability in the BI Server to roll up a level-based measure (after enforcing the levels). The other option is to enforce a filter whenever either the Time or Customer dimension is not chosen. The biggest advantage of this method, though, is in doing currency conversions for YTD and MTD measures. For YTD and MTD measures, there might be a requirement to multiply by the latest rate for that Month/Year rather than by the rate for each day. In such cases, all we need to do is create Rate YTD and Rate MTD fact tables as shown below:
Picture 21
The Rate Measures in each of the separate logical fact tables will be assigned to the Month & Year level respectively.
Picture 22
Picture 23
And their respective Logical Table Sources will have filters applied as shown below (shown for YTD).
Picture 24
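As a sketch of what such an LTS filter might express, the Rate YTD source keeps only the most recent rate available for each year and currency. The subquery form below is purely illustrative; the actual filter syntax and columns depend on your model:

```sql
-- Illustrative only: restrict the YTD rates source to the latest rate
-- for each year/currency combination.
RATE_DATE = (SELECT MAX(r2.rate_date)
             FROM   currency_rates r2
             WHERE  r2.from_currency = currency_rates.from_currency
             AND    TRUNC(r2.rate_date, 'YYYY') = TRUNC(currency_rates.rate_date, 'YYYY'))
```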
This will ensure that we have the capability to determine which rates to multiply with which measure.
Rate Multiplication for every transaction:
This second method is suited to cases where we have a genuine need to multiply the rates for each and every transaction, for example when the rate is effectively an attribute of the fact rows themselves (as against a separate fact table with a differing grain). In such cases, we can bring in the Rates table as an inner-joined table to the main fact (or model it as a dimension, depending on what is required).
Picture 25
Picture 26
Now, let’s create a report using this model.
Picture 27
Notice that the join is pushed into the main fact table itself. Sometimes this might perform better, especially when filters are applied properly. The SQL produced for this is given below:
WITH
SAWITH0 AS (select T12623.CHANNEL_CLASS as c1,
     T13704.COUNTRY_NAME as c2,
     sum(T12697.AMOUNT_SOLD * T18227.RATE) as c3,
     T12623.CHANNEL_CLASS_ID as c4,
     T13704.COUNTRY_ID as c5
from
     COUNTRIES T13704,
     CUSTOMERS T12638,
     CHANNELS T12623,
     SALES T12697,
     CURRENCY_RATES_FACT T18227
where  ( T12623.CHANNEL_ID = T12697.CHANNEL_ID and T12638.COUNTRY_ID = T13704.COUNTRY_ID and T12638.CUST_ID = T12697.CUST_ID and T12697.CUST_ID = T18227.CUST_ID and T12697.TIME_ID = T18227.TIME_ID )
group by T12623.CHANNEL_CLASS, T12623.CHANNEL_CLASS_ID, T13704.COUNTRY_ID, T13704.COUNTRY_NAME)
select SAWITH0.c1 as c1,
     SAWITH0.c2 as c2,
     SAWITH0.c3 as c3
from
     SAWITH0
order by c1, c2
The important point to note, though, is that we cannot easily achieve the MTD & YTD rate conversion functionality that we saw in the first method. Though it is possible, it will take some work to make it perform well.
As you see, both methods above have their own pros and cons. Of course, your actual scenario might be completely different (you might have both the local currency and the reporting currency stored in the DW, for example), but this should hopefully be useful for people starting out with a currency conversion requirement in BI EE.

Troubleshooting OBIEE: Diagnostic Files

I’m over in Singapore at the moment, and the client I’m working with is looking for some help with troubleshooting Oracle BI EE installations. This is an interesting area because, as developers, we generally focus on building new applications rather than supporting existing ones, but if you’re handing these systems off for other people to support, you need to give them some guidance on how to solve problems. So if you’re going to support an Oracle BI EE system, how do you go about troubleshooting it?
If we are considering Oracle BI EE troubleshooting, to me there are three areas we need to cover:
  1. What indicators are there of current and potential problems?
  2. What are some of the common problems we might encounter when supporting Oracle BI EE?
  3. Are there things we can do to minimize the chance of problems occurring?
So to start with the first point, what indicators are there of potential problems? Well the first place I would look would be the various logs that the Oracle BI EE server components generate. The most obvious logs are the BI Server, BI Scheduler and BI Cluster Controller ones:
  • $ORACLEBI/server/log/NQServer.log (the BI Server “alert log” file)
  • $ORACLEBI/server/log/NQQuery.log (the BI Server query log file)
  • $ORACLEBI/server/log/NQScheduler.log (the Delivers/scheduler log file)
  • $ORACLEBI/server/log/NQCluster.log (the Cluster Controller log file)
with the NQServer.log one being the most useful when diagnosing startup and other issues with the BI Server. In the example below, the BI Server can’t start up because there are no available subject areas in the RPD file (in this case, due to developers saving the RPD with consistency errors).
Alert Log Cant Start
If anything goes wrong with the BI Server, or causes it to do something unusual (like not start usage tracking, or not be able to connect to a required external authentication mechanism) it’ll appear in this log. If a user reports that Oracle BI EE is unavailable, particularly if the dashboard interface comes up but it can’t connect to the BI Server, this is the place to look. It also makes sense to check this log file anyway, on a regular basis, in case it’s reporting an issue that users haven’t spotted yet but that might cause problems for you later on.
The other log file that’s worth keeping an eye on is the Presentation Services log file, found at
  • $ORACLEBIDATA/web/log/sawlog0.log
This log file records failed login attempts (useful to see if someone is trying to hack into your system, or if you’re having problems authenticating against LDAP or Shared Services, for example), and it also shows issues you might be having with dashboard elements and reports.
Saw Log
A couple of examples of where this particular log file was useful:
  • On an Oracle BI Applications implementation on AIX, the Presentation Server would crash with a segfault on particular dashboard pages. When we looked in the Presentation Services log, we could see that certain pivot table reports had a very high cell count, a condition that should normally have been caught by a setting in the instanceconfig.xml file but was instead crashing the Presentation Server.
  • When setting up EPM and OBIEE integration, we could see that incoming login requests from the EPM Dashboard were being rejected as the Presentation Server wasn’t properly handling the incoming SSO token.
Together with the NQServer.log file, these two files should be the first place you look if the BI Server or Presentation Server are either failing to start, regularly crashing or exhibiting strange behavior. In addition, if you’re lucky enough to have the Enterprise Manager BI Management Pack, there are various screens within the pack for remotely monitoring the contents of these logs.
Other logs that you will want to keep an eye on include:
  • $ORACLEBIDATA/web/log/javahost/jhost0.log (Java host log)
Javahost Log
This is the log file for the Java Host process, and can be useful in diagnosing problems when setting up the Hyperion Custom Authenticator amongst other things.
The logs generated by the J2EE application server hosting the Presentation Services plug-in can be useful as well. The log files for OC4J and Weblogic 9.2 can be found here:
  • $ORACLEBI/oc4j_bi/j2ee/home/log/server.log
  • $BEA_HOME/user_projects/domains/bi/servers/AdminServer/logs/AdminServer.log
In addition, if you are running the Oracle BI components on Windows, the Application log in Event Viewer shows you when the server components fail to start or shut down abnormally. From what I can see, though, this is more or less the same information that’s in the NQServer.log file, albeit in a centralized place that systems management tools can make use of.
Under Red Hat Linux / Oracle Enterprise Linux, the equivalent utility is the Systems Logs application, and I guess there are similar utilities available for the various Unix equivalents.
Linux Alert Log Viewer
Outside of OBIEE, problems you encounter might be down to the underlying database or OLAP server either not being available or failing to authenticate the credentials passed across by the connection pool. So it makes sense to keep an eye on the Oracle alert log, for example, to see whether the problem lies with the database, or just to spot potential database problems before they hit the user.
So, that’s it for log files and places to diagnose potential problems. Do you know of any others that you find useful on a regular basis? I’m off to work now, but later on I’ll continue this series with some tips on what problems you are likely to encounter on an Oracle BI EE system, how you can do some preventative maintenance to avoid them, and also a high-level view of how to troubleshoot an attached Essbase OLAP server and Shared Services.