Comparing OBIEE Usage Tracking with NQSQUERY.LOG

The other day I posted about the query log file generated by the Oracle BI Server, and how you could use it to add diagnostics to your query environment. Scott Powell added a comment to the posting suggesting that I take a look at the Usage Tracking feature within OBIEE, as this logs query statistics to a database table and allows you to run reports against them. I’d made a mental note of usage tracking but not used it in anger, and as I heard at Open World that the usage tracking feature had been improved in 10.1.3.4, I thought I’d take a look and see how it now worked.
Usage Tracking isn’t something that’s enabled by default with OBIEE; you have to run a script to create a tracking table called S_NQ_ACCT in your database and then register the table in the repository and the NQSCONFIG.INI file. You can log statistics to a file instead, but direct insertion into a table is the recommended approach, as it allows better analysis and the inserts can be batched up to reduce the load on the server. Something that I think is new in 10.1.3.4 is a subdirectory under $ORACLEBI/server/sample called “usagetracking”; this contains a number of extra table creation scripts, a repository RPD file that you can import and some web catalog files that provide instant reports over your usage tracking data. I believe this content first shipped with the BI Applications and now ships with OBIEE from this release onwards.
[Screenshot: the usagetracking samples directory that ships with 10.1.3.4]
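For reference, the shipped creation script gives you one row in S_NQ_ACCT per logical request. The sketch below shows the sort of columns you can expect; I’m writing this from memory, so treat the column list and datatypes as indicative only and take the real definition from the script in the usagetracking directory.

CREATE TABLE S_NQ_ACCT (
  USER_NAME          VARCHAR2(128),   -- who ran the request
  SUBJECT_AREA_NAME  VARCHAR2(128),   -- presentation catalog the request ran against
  QUERY_TEXT         VARCHAR2(1024),  -- the logical SQL issued
  SAW_SRC_PATH       VARCHAR2(250),   -- web catalog path of the saved request
  START_TS           DATE,            -- request start timestamp
  END_TS             DATE,            -- request end timestamp
  TOTAL_TIME_SEC     NUMBER(10,0),    -- elapsed time as seen by the user
  CUM_DB_TIME_SEC    NUMBER(10,0),    -- time spent in the back-end database(s)
  NUM_DB_QUERY       NUMBER(10,0),    -- number of physical queries generated
  ROW_COUNT          NUMBER(10,0),    -- rows returned to the user
  SUCCESS_FLG        NUMBER(10,0)     -- 0 indicates success
);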
I don’t know if it was just me, but it actually took me quite a lot of time to get Usage Tracking up and running. First off, there are some extra tables to create, including a view (S_NQ_LOGIN_GROUP) that is required for the various reports but only seems to ship with a SQL Server script – in fact the various repository files and scripts look like they were originally developed for SQL Server, the database I often see some of the original nQuire product managers demoing OBIEE with. I guess historically that’s the database many of the demos were developed for, and it’s reflected in the Usage Tracking samples that ship with 10.1.3.4. Once you’ve got the tables created you need to add a couple of entries to the NQSCONFIG.INI file to turn on usage tracking, tell the BI Server to write the log entries to the database, and specify which database table to insert into.
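If, like me, you’re running against an Oracle database, translating that view isn’t hard; a minimal sketch is shown below. Note that I’m assuming the view simply exposes each distinct user name recorded in S_NQ_ACCT – take the actual column list and mapping from the SQL Server script that ships in the usagetracking directory rather than from this example.

-- Illustrative Oracle translation only; use the shipped SQL Server script as the reference
CREATE OR REPLACE VIEW S_NQ_LOGIN_GROUP AS
SELECT DISTINCT
       USER_NAME AS LOGIN,    -- the user recorded in S_NQ_ACCT
       USER_NAME AS RESP      -- assumed: the “group” defaults to the user name
FROM   S_NQ_ACCT;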
[Screenshot: the USAGE_TRACKING section of the NQSCONFIG.INI file]
The key entries are:
[ USAGE_TRACKING ]
ENABLE = YES;
DIRECT_INSERT = YES;
PHYSICAL_TABLE_NAME = <name of physical S_NQ_ACCT table, as located in the physical model>
CONNECTION_POOL = <name of connection pool for log writes, should be separate from other connection pools>
BUFFER_SIZE = 10 MB;
BUFFER_TIME_LIMIT_SECONDS = 5;
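The two placeholder entries are just the fully-qualified names from the Physical layer of your repository. For illustration, if you’d imported S_NQ_ACCT into a physical database called “Usage Tracking” with an OBI_USAGE schema and created a dedicated connection pool, the entries might look something like this – all of the names here are hypothetical, so substitute whatever you used in your own repository:

PHYSICAL_TABLE_NAME = "Usage Tracking"."OBI_USAGE"."S_NQ_ACCT";
CONNECTION_POOL = "Usage Tracking"."Usage Tracking Connection Pool";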
The repository also took a while to import properly; it makes use of old-style time series functionality and I had to recreate all the derived measures (change since last year, percentage change since last year and so on) using the new “Time Dimension”-based time series functionality that comes with 10.1.3.4. In the end, though, I managed to get it all up and running, with a new physical model, business model and subject area within my repository, so that I could now start recording and analyzing usage tracking information.
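If you have to do the same rework, it’s fairly mechanical once you’ve defined a time dimension with a chronological key: the derived measures become logical columns built on the AGO function. The expressions below are just a sketch – the business model, table and column names are made up for illustration, so map them onto whatever your own usage tracking model uses.

-- “Query Count Last Year”: the query count measure offset back one year
AGO("Usage Tracking"."Fact - Usage"."Query Count", "Usage Tracking"."Time Dim"."Year", 1)

-- “Change Since Last Year”
"Usage Tracking"."Fact - Usage"."Query Count"
  - AGO("Usage Tracking"."Fact - Usage"."Query Count", "Usage Tracking"."Time Dim"."Year", 1)

-- “% Change Since Last Year”
100.0 * ("Usage Tracking"."Fact - Usage"."Query Count"
         - AGO("Usage Tracking"."Fact - Usage"."Query Count", "Usage Tracking"."Time Dim"."Year", 1))
      / AGO("Usage Tracking"."Fact - Usage"."Query Count", "Usage Tracking"."Time Dim"."Year", 1)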
[Screenshot: the new Usage Tracking physical model, business model and subject area in the repository]
Once you get the repository up and running, and the web catalog imported, you can take a look at the pre-defined reports. Here’s one that lists out the top 10 longest running queries.
[Screenshot: pre-defined report listing the top 10 longest-running queries]
You can also of course create your own reports, and add them to a Usage Tracking dashboard, which you can use to monitor the health of your system, highlight problem reports, see if response time is deteriorating over time and so on.
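Because the data is just sitting in a database table, you don’t even have to go through Answers to get at it; a query along these lines (the column names are as I remember them from S_NQ_ACCT, so check them against your own table) gives you the same sort of “longest-running requests” view straight from SQL*Plus:

SELECT *
FROM  (SELECT saw_src_path,            -- web catalog path of the request
              user_name,
              start_ts,
              total_time_sec,          -- elapsed time in seconds
              row_count
       FROM   s_nq_acct
       WHERE  success_flg = 0          -- assumed: 0 indicates a successful request
       ORDER  BY total_time_sec DESC)
WHERE  rownum <= 10;                   -- top 10 longest-running requests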
So how does this compare to the log file analysis that I blogged about the other day? Well, putting my Oracle database hat on, analyzing queries via the log file is more like performing an SQL trace, whereas the usage tracking feature is more like using Statspack, or an automated session logging tool such as AWR or ADDM. The NQSQUERY.LOG file gives you more information than usage tracking, including the actual physical SQL or MDX sent to the underlying database, details of how federated queries are broken down into individual database calls, plus the logical execution plan for a query together with individual row counts at the various stages of the query. Like SQL tracing though, you need to have turned logging (or tracing) on before you can analyze a query, and as Oracle’s recommendation is only to turn logging on for diagnostic purposes, it’s likely that it won’t be on when a user calls you up and tells you that the system is slow. Usage Tracking, in contrast, like Statspack or AWR/ASH, is likely to be on at all times, and whilst it won’t tell you the details of how a query executed, you can use it to generate general trend-type reports in order to spot when a problem is developing, based on deteriorating performance. The query log appeals to me as I like to look at the internals, but for most people the extra information available beyond logging level 3 (the logical execution plan and the intermediate row counts) won’t be relevant, as they’re mostly interested in whether query performance has spiked recently.
So, from a diagnostics and problem highlighting point of view, my recommendation would be to have Usage Tracking turned on and logging query performance as a background task, potentially using an iBot to send an alert if key queries start to come back with worse-than-average response times. Then, if you’re looking to investigate a particular query, or perhaps if you’re interested in how the BI Server handles a particularly complex federated or other query, the query log is a useful equivalent of the Oracle database SQL trace file and gives you the ability to see how the BI Server goes about satisfying a user query.
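If you wanted to prototype that iBot condition, the logic is simple enough to express directly against S_NQ_ACCT. Here’s a rough sketch – the thresholds are made up and the same caveat about column names applies – that flags any saved request whose average response time over the last day is more than 50% worse than its 30-day baseline; hook something like it up to an iBot, or even just a scheduled report, and you get your early warning without having to leave query logging switched on all the time.

SELECT recent.saw_src_path,
       recent.avg_recent_sec,
       hist.avg_baseline_sec
FROM  (SELECT saw_src_path, AVG(total_time_sec) AS avg_recent_sec
       FROM   s_nq_acct
       WHERE  start_ts >= SYSDATE - 1                      -- last 24 hours
       GROUP  BY saw_src_path) recent
JOIN  (SELECT saw_src_path, AVG(total_time_sec) AS avg_baseline_sec
       FROM   s_nq_acct
       WHERE  start_ts >= SYSDATE - 30                     -- 30-day baseline
       GROUP  BY saw_src_path) hist
  ON  recent.saw_src_path = hist.saw_src_path
WHERE recent.avg_recent_sec > hist.avg_baseline_sec * 1.5; -- 50% worse than the baseline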