
Oracle BI EE 10.1.3.4.1 – Setting up Disconnected Analytics

Recently at UKOUG we presented a demo (more details in coming blog entries) in which we also showed the capabilities of the new iPhone app, BI Indicators. This triggered a chain of thoughts on the pros and cons of three commonly used BI EE remote-access techniques (or content-delivery methods):
1. iPhone app accessing the BI Server live
2. Pushing reports/dashboards through iBots to remote mobile devices
3. Using disconnected analytics
The first two are quite commonly used and pretty straightforward to set up as well. But the least used component of BI EE amongst customers is Disconnected Analytics (only certain verticals, such as pharma, seem to use it). Moreover, with everyone moving towards the cloud (we have internet even on planes these days), the practical use of Disconnected Analytics is actually shrinking. Having said that, it is probably one of the components with the most moving parts, and it has a reasonably sound architecture. Today I will be covering the process of setting up Disconnected Analytics.
The high-level architecture of Disconnected Analytics is given below.
[Screenshot: high-level architecture of Disconnected Analytics]
The basic premise behind Disconnected Analytics is that the client machines will not have access to the network (only sporadically, for syncing). So, to enable access to BI EE dashboards/reports in a disconnected mode, every client machine gets a local install of a simplified version of BI EE called Disconnected Analytics. This simplified version works exactly the same as the actual BI EE; the biggest difference is in how data access is enabled. Along with the simplified version of BI EE, a small database called Sybase SQL Anywhere gets installed as a component of Disconnected Analytics. So, to enable local access to data through Disconnected Analytics, small portions of the live data are copied onto the SQL Anywhere database through a process called synchronization. This can be easily understood from the screenshot below.
[Screenshot: synchronization overview]
Though the architecture is pretty straightforward, there are quite a few steps we need to go through to get this to work. I will start with the very simple model shown below.
[Screenshot: sample data model]
We basically have three tables – Products, Promotions and Sales – in an Oracle database. We also have a repository designed against these three tables and a set of reports defined against this repository. All of these reside on Machine A, which will be our server machine. The first step in the whole setup process is to define what the client machine will need when it is not on the network. The lowest possible unit that can be synchronized is a single subject area (variations are possible, but keeping it at the subject-area level makes the synchronization easier to handle). So we start by defining an application, SH, which corresponds to all the source tables, the subset of the RPD contributing to this subject area, and the set of reports created out of this subject area. Following are the steps we need to do on the server Machine A.
Server Machine:
Once the application is identified, we start by creating it as a folder in the {OracleBIData}\disconnected folder. Every application will consist of the following:
1. A sub-folder named app that stores the RPD to be synchronized, an archive of the web catalog's shared folder to be synchronized, and the SQL DDL statements that will be used to create the source tables in the SQL Anywhere database.
2. A sub-folder named data that the server machine uses to hold data extracted by certain reports (which we will call sourcing reports) to populate the SQL Anywhere tables. This data will be a subset of the data in the actual source.
3. A configuration XML file with the same name as the application folder, containing a set of tags that define how synchronization for the application will happen.
[Screenshots: application folder structure]
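To make the layout concrete, here is roughly what the SH application folder ends up looking like (the file names are the ones used in this example; yours may differ):

```text
{OracleBIData}\disconnected\
└── SH\
    ├── SH.xml                   (application configuration file)
    ├── app\
    │   ├── Disconnected.rpd     (subset RPD pointing at SQL Anywhere)
    │   ├── Disconnected.catalog (archived shared folder)
    │   ├── Products.sql
    │   ├── Promotions.sql
    │   └── Sales.sql
    └── data\                    (populated during synchronization)
```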
Once the structure is created, we start with the DDL SQL statements for creating the source tables in the SQL Anywhere database. Remember, this is the SQL that will be used during synchronization. There are two kinds of data synchronization: incremental and overwrite. I will just be covering the latter, which overwrites the source tables every time during synchronization. We need a SQL file for every table that needs to be created in the SQL Anywhere database on each client. Since we have just three tables, we create three SQL files containing the following DDL:
DROP TABLE PRODUCTS;

CREATE TABLE PRODUCTS(
PROD_CATEGORY CHAR(50),
PROD_CATEGORY_ID INTEGER,
PROD_ID INTEGER,
PROD_NAME CHAR(50),
PROD_TOTAL CHAR(13),
PROD_TOTAL_ID INTEGER);
DROP TABLE PROMOTIONS;

CREATE TABLE PROMOTIONS(
PROMO_CATEGORY CHAR(30),
PROMO_CATEGORY_ID INTEGER,
PROMO_ID INTEGER,
PROMO_NAME CHAR(50),
PROMO_TOTAL CHAR(15),
PROMO_TOTAL_ID INTEGER);
DROP TABLE SALES;

CREATE TABLE SALES(
AMOUNT_SOLD INTEGER,
PROD_ID INTEGER,
PROMO_ID INTEGER,
QUANTITY_SOLD INTEGER);
Place the SQL files in the app folder. They can be given any name, as they will be referred to in the application configuration XML file.
Once we have the SQL files, we need to create the subset of the RPD that will be used on each client machine. In our case, since we have only one subject area in our application, we create an RPD with just that subject area alone. Remember, this is the RPD that will be used by each client, and each client will only have the SQL Anywhere database. So ensure that you change the connection pool of the data source to point to the SQL Anywhere database.
[Screenshots: connection pool changed to point to SQL Anywhere]
Copy this RPD over to the app folder.
Once the RPD is copied as well, the next step is to copy over the reports that we need on the client machines. The best option is to archive the entire shared folder and then copy the archive into the app folder. The archive can be created either from the BI EE UI or through Catalog Manager. In effect, the app folder should contain the SQL DDL files, the RPD and the web catalog in archived format, as shown below.
[Screenshot: app folder contents]
Now that our app folder has been set up completely for synchronization, we need to create the set of reports that will populate our tables in the SQL Anywhere database. The idea is that whenever a client machine synchronizes the application, BI EE first creates the tables and then runs these sourcing reports to extract the data in CSV or compressed DAT format. It then imports the extracts into the SQL Anywhere database. The sourcing reports should include all the columns chosen in the SQL DDL, and the order of the columns should match as well. In our case we create one sourcing report for each table.
[Screenshot: sourcing reports]
These reports can contain whatever filters are relevant to each of your reports. This is the part that can be tricky: in most cases we do not expose ID columns in the presentation layer, but for Disconnected Analytics to work you need to ensure that you have a report that populates the IDs as well. It also gets a lot trickier when you have federation or fragmentation set up in your repository.
[Screenshot: sourcing report definition]
Once we have these reports created, the next step is to create the application config XML file. The XML file that I used is given below:
<remotecfg>
<application name="SH" displayname="SH" dir="app">
 <repository  name="Disconnected.rpd"/>
 <webcatalog  name="Disconnected.catalog"/>
 <displayname lang="en" value="SH"/>
</application>
<data dir="data" catalogfolder="/shared/Disconnected/SH">
 <dataset name ="SH" incremental="true" syncmode="online" subjectarea="SH">
 <displayname lang="en" value="SH"/>
 <table name="Products">
  <sourcingreport name="Products" file="Products.csv"/>
  <tablesql name="Products.sql"/>
 </table>
 <table name="Promotions">
  <sourcingreport name="Promotions" file="Promotions.csv"/>
  <tablesql name="Promotions.sql"/>
 </table>
 <table name="Sales">
  <sourcingreport name="Sales" file="Sales.csv"/>
  <tablesql name="Sales.sql"/>
 </table>
 </dataset>
</data>
</remotecfg>
This XML file is nothing but a set of instructions on how to process the application during synchronization.
Client Machines:
The above setup instructions were for the BI EE server machine. On the client machines, the first thing we have to do is install the Disconnected Analytics client (an option during the BI EE install).
[Screenshot: Disconnected Analytics Client install option]
Once the install is done, we first connect the machine to the network for synchronization, then navigate to the server machine's BI EE URL and click on Disconnected Analytics. This is what took me a while to figure out: I was under the impression that there would be some setting in the Disconnected Application Manager to change the server machine URL. Unfortunately, it does not work that way. What we need to do is open the server URL from the client machine and navigate to the Disconnected Analytics link.
[Screenshot: Disconnected Analytics link in the BI EE UI]
This will show the list of all the applications that have been set up on the server machine. If we click on Update Data, it will automatically open a Save/Open dialog asking us to save or open a DAD file.
[Screenshot: DAD file download prompt]
This DAD file basically provides the configuration settings necessary for the Disconnected Application Manager to work. Just click Open, and it will automatically launch the Application Manager.
[Screenshot: Disconnected Application Manager]
From here on, the synchronization will work out of the box.
[Screenshot: synchronization in progress]
As you can see, the synchronization process first fires all the logical SQL corresponding to our sourcing reports on the server machine. The results of these reports are stored as compressed CSVs and then moved over to the client machine. You can see the CSVs even after the synchronization.
[Screenshot: extracted CSV files]
This can potentially raise security concerns, as the data is stored in CSV format. But since the logical SQL is fired as the person logging in, it honors all the rules we set up in the repository and the database (if VPD is used). The synchronization will also automatically update the client's NQSConfig.ini to point to the new repository, etc.
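For example, after a successful sync the repository section of the client's NQSConfig.ini ends up referencing the downloaded RPD, roughly like this (the entry name and value here are illustrative):

```ini
[ REPOSITORY ]
Star = Disconnected.rpd, DEFAULT;
```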
[Screenshot: updated NQSConfig.ini]
To me there are quite a few moving parts. But one good thing about this is that there are quite a few capabilities, like incremental updates, that could potentially make it very robust. Still, I cannot see someone installing 600MB worth of software on a client machine and going through the hassle of setting all this up to get the same functionality as the BI Server (at least not now, considering the amount of connectivity we have these days). One other thing I could not understand is the reason behind Oracle sticking with SQL Anywhere. There are quite a few other databases that Oracle owns, like TimesTen and Berkeley DB (from Sleepycat), which could potentially be a replacement (probably because these databases are not supported by BI EE).

More on Connection Pools…

We discussed the basics of the connection pool in one of the previous posts. The connection pool is a very important repository object and we should have a good knowledge of it. Moreover, many interview questions are based on it. Now, let's understand something more about it…
The Connection Pool dialog has three tabs: General, XML and Write Back.
In the General tab, we first need to give the Name of the connection pool.
Call Interface specifies the application program interface (API) that Siebel Analytics will use to access the data source. The list of available call interfaces depends on the type of database used as the data source; for Oracle, for example, we have OCI, ODBC, etc.
Maximum Connections sets the maximum number of connections allowed through this connection pool. The default value is 10.
Require fully qualified table names makes table references carry complete details, like DB.schema.table_name. When this option is selected, all requests sent from the connection pool use fully qualified names to query the underlying database.
In Data Source Name we need to give the host string of the DB to which the queries will be routed. The data source name must contain valid logon information; otherwise the connection to that data source will fail.
If the Shared Logon option is checked, then all connections to the database that use the connection pool will use the user name and password specified in the connection pool. If it is not selected, connections through the connection pool use the database user ID and password specified in the DSN or in the Siebel user profile.
In Username and Password you need to specify the username and password of the underlying DB. We can also use a variable for the username and change it whenever we intend to switch the underlying DB.
Enable Connection Pooling allows a single DB connection to remain open for a specified time so that future query requests can route through the same connection. This saves the overhead of opening and closing new connections; if the option is unchecked, each query requires the DB to open a new connection.
Timeout specifies the time for which the connection to the data source remains open after a request completes. During this time, new requests use this connection instead of opening a new one. If the timeout value is set to 0, connection pooling is disabled.
Execute Queries Asynchronously specifies whether the data source supports asynchronous queries.
Execute On Connect allows the Siebel Analytics administrator to specify a command sent to the DB each time a connection to the DB is established. It can be any command accepted by the DB.
Parameters Supported, when checked, indicates that all the DB parameters mentioned in the DB features file are supported by the Siebel Analytics Server.
Isolation Level controls the transaction locking behaviour for all statements issued by a connection.
  • Committed Read – locks are held while the data is read, to avoid dirty reads. Data can still be changed before the transaction ends on that connection.
  • Dirty Read – isolation level 0, no locking. You can read uncommitted (dirty) data, and values can change during the read process within a transaction. The least restrictive of all the levels.
  • Repeatable Read – places locks on all data used in a query so that nobody can update the data. However, new rows can be inserted by other users and will appear in later reads within the current transaction.
  • Serializable – places a range lock on the data set, preventing other users from inserting or updating rows until the transaction is complete. The most restrictive of all.
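For reference, these levels correspond to the standard SQL transaction isolation levels; a database receiving the setting would see something equivalent to one of the following statements (exact syntax varies by vendor):

```sql
-- ANSI SQL equivalents of the four isolation levels (vendor syntax varies)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- Dirty Read
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- Committed Read
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- Repeatable Read
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- Serializable
```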

Connection Pool in Admin Tool….

I'm back again after some gap… Now, let's understand what a connection pool is.
The connection pool is an object in the Physical layer that describes access to the data source. It contains information about the connection between the Siebel Analytics Server and that data source. 
The Physical layer in the Administration Tool contains at least one connection pool for each database. When you create the physical layer by importing a schema for a data source, the connection pool is created automatically. You can configure multiple connection pools for a database. Connection pools allow multiple concurrent data source requests (queries) to share a single database connection, reducing the overhead of connecting to a database.
For each connection pool, you must specify the maximum number of concurrent connections allowed. After this limit is reached, the Analytics Server routes all other connection requests to another connection pool or, if no other connection pools exist, the connection request waits until a connection becomes available.
A word of caution: increasing the allowed number of concurrent connections can potentially increase the load on the underlying database accessed by the connection pool.
In addition to the potential load and costs associated with the database resources, the Siebel Analytics Server allocates shared memory for each connection upon server startup. Raising the number of connections therefore increases Siebel Analytics Server memory usage.
I would like users to share their comments also…


Setting Up Connection Pools


The connection pool is an object in the Physical layer that describes access to the data source. It contains information about the connection between the Oracle BI Server and that data source.
The Physical layer in the Administration Tool contains at least one connection pool for each database. When you create the physical layer by importing a schema for a data source, the connection pool is created automatically. You can configure multiple connection pools for a database. Connection pools allow multiple concurrent data source requests (queries) to share a single database connection, reducing the overhead of connecting to a database.
NOTE:  It is recommended that you create a dedicated connection pool for initialization blocks. For additional information, refer to Creating or Changing Connection Pools.
For each connection pool, you must specify the maximum number of concurrent connections allowed. After this limit is reached, the Oracle BI Server routes all other connection requests to another connection pool or, if no other connection pools exist, the connection request waits until a connection becomes available.
Increasing the allowed number of concurrent connections can potentially increase the load on the underlying database accessed by the connection pool. Test and consult with your DBA to make sure the data source can handle the number of connections specified in the connection pool. Also, if the data sources have a charge back system based on the number of connections, you might want to limit the number of concurrent connections to keep the charge-back costs down.
In addition to the potential load and costs associated with the database resources, the Oracle BI Server allocates shared memory for each connection upon server startup. Raising the number of connections therefore increases Oracle BI Server memory usage.
Table 7 contains a brief description of preconfigured connection pools.
Table 7. Preconfigured Connection Pools
  • BBB Data Warehouse – Enterprise visibility to the Oracle Business Analytics Warehouse. (Oracle's Siebel operational applications; Oracle BI customers only.)
  • BBB OLTP – Enterprise visibility to Oracle's Siebel transactional database. (Oracle's Siebel operational applications; Oracle BI customers only.)
  • BBB XLS Data – DSN is the same as the connection pool name. Data is stored in C:\Data\Applications\BBB\BBB. (Oracle's Siebel operational applications; Oracle BI customers only.)
  • ERM OLTP – Oracle's Siebel Workforce Analytics connection to Oracle's Siebel transactional database. (Workforce Oracle BI customers only.)
  • Externalized Metadata Strings – Connection to Oracle's Siebel operational application database to load the translations of metadata strings. (Financial Services customers and all customers deploying Oracle BI in a language other than English.) NOTE: This connection pool is configured to be the same as Oracle's Siebel transactional database.
  • Forecasting Oracle Business Analytics Warehouse – Connection to the Oracle Business Analytics Warehouse. (Forecasting Oracle BI customers only.)
  • Forecasting Siebel OLTP – Connection to Oracle's Siebel transactional database. (Real-time Forecasting Oracle BI customers only.)
  • Incentive Compensation Siebel OLTP – Database connection to Oracle's Siebel transactional database. (Incentive Compensation Oracle BI customers only.)
  • Pharma Data Warehouse – Connection to the Pharmaceutical data warehouse. (Pharmaceutical industry-specific customers only.)
  • Real-time OLTP – Connection to Oracle's Siebel transactional database. (Real-time Oracle BI; all customers.)
  • SIA Data Warehouse – Connection for Oracle's Siebel Industry Applications data warehouse. (Oracle's Siebel Industry Applications customers only.)
  • Oracle BI Usage – Usage Tracking Writer Connection Pool; connects to the database where you store the usage statistics of the Oracle BI Server. (Optional for all customers.) NOTE: Using this requires that the Oracle BI Scheduler Server be set up to load the usage statistics into the database.
  • Oracle Business Intelligence Warehouse – Database connection to the Oracle Business Analytics Warehouse. (Oracle BI applications customers only.)
  • Siebel OLTP – NOTE: There are two connection pools to Oracle's Siebel transactional database; both should be configured properly. OLTP DbAuth connects to Oracle's Siebel transactional database for authentication and authorization. The user name and password are preconfigured to USER and PASSWORD and should be left as such if you want to use database logons to authenticate users. (All customers.)
  • UQ Siebel OLTP – Connection to Oracle's Siebel transactional database. (Oracle's Siebel Universal Queuing customers only.)
  • Usage Accelerator Datawarehouse – Database connection to the Oracle Business Analytics Warehouse. (Oracle BI applications customers only.)

Creating or Changing Connection Pools


You must create a database object before you create a connection pool. Typically, the database object and connection pool are created automatically when you import the physical schema. You create or change a connection pool in the Physical layer of the Administration Tool.
CAUTION:  It is strongly recommended that customers use OCI for connecting to Oracle. ODBC should only be used to import from Oracle.

About Connection Pools for Initialization Blocks

It is recommended that you create a dedicated connection pool for initialization blocks. This connection pool should not be used for queries.
Additionally, it is recommended that you isolate the connection pools for different types of initialization blocks. This also makes sure that authentication and login-specific initialization blocks do not slow down the login process. The following types should have separate connection pools:
  • All Authentication and login-specific initialization blocks such as language, externalized strings, and group assignments.
  • All initialization blocks that set session variables.
  • All initialization blocks that set repository variables. These initialization blocks should always be run using the system Administrator user login.
    Be aware of the number of these initialization blocks, their scheduled refresh rate, and when they are scheduled to run. Typically it would take an extreme case for this scenario to strain resources: for example, refresh rates set in minutes, more than 15 initialization blocks refreshing concurrently, or either of these occurring during prime user-access time frames. To avoid straining the available resources, you might want to disable query logging for the default Oracle BI Administrator.
Initialization blocks should be designed so that the maximum number of Oracle BI Server variables may be assigned by each block. For example, if you have five variables, it is more efficient and less resource intensive to construct a single initialization block containing all five variables. When using one initialization block, the values will be resolved with one call to the back end tables using the initialization string. Constructing five initialization blocks, one for each variable, would result in five calls to the back end tables for assignment.
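As a sketch, a single initialization block can assign several session variables with one query, with each selected column mapped positionally to a variable in the block's variable list. The table, columns and variables below are hypothetical, purely to illustrate the single-round-trip design:

```sql
-- Hypothetical initialization-block SQL: one call to the back-end table
-- assigns five session variables (e.g. REGION, DIVISION, ROLE, CURRENCY,
-- LANGUAGE) by column position. :USER is substituted by the BI Server
-- with the login of the session being initialized.
SELECT region,
       division,
       role,
       currency_code,
       language_code
FROM   user_profile
WHERE  login = ':USER'
```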

Setting Up General Properties For Connection Pools

Use this section to complete the General tab.
To set up general properties for connection pools
  1. In the Physical layer of the Administration Tool, right-click a database and choose New Object > Connection Pool, or double-click an existing connection pool.
    The following is an illustration of the General tab in the Connection Pool dialog box.
  2. In the Connection Pool dialog box, click the General tab, and then complete the fields using information in Table 8.
    Properties that are specific to multidimensional data sources can be found in Table 9.
Table 8. Connection Pool General Properties
Field or Button
Description
Call interface
The application program interface (API) with which to access the data source. Some databases may be accessed using native APIs, some use ODBC, and some work both ways. If the call interface is XML, the XML tab is available but options that do not apply to XML data sources are not available.
Data source name
The drop-down list shows the User and System DSNs configured on your system. Choose a data source name that is configured to access the database to which you want to connect. The data source name must contain valid logon information; if the information is invalid, the database logon specified in the DSN will fail.
Enable connection pooling
Allows a single database connection to remain open for the specified time for use by future query requests. Connection pooling saves the overhead of opening and closing a new connection for every query. If you do not select this option, each query sent to the database opens a new connection.
Execute on Connect
Allows the Oracle BI Administrator to specify a command to be executed each time a connection is made to the database. The command may be any command accepted by the database. For example, it could be used to turn on quoted identifiers. In a mainframe environment, it could be used to set the secondary authorization ID when connecting to DB2 to force a security exit to a mainframe security package such as RACF. This allows mainframe environments to maintain security in one central location.
Execute queries asynchronously
Indicates whether the data source supports asynchronous queries.
Isolation level
For ODBC and DB2 gateways, the value sets the transaction isolation level on each connection to the back-end database. The isolation level setting controls the default transaction locking behavior for all statements issued by a connection. Only one of the options can be set at a time. It remains set for that connection until it is explicitly changed.
The following is a list of the options:
Committed Read. Specifies that shared locks are held while the data is read to avoid dirty reads. However, the data can be changed before the end of the transaction, resulting in non repeatable reads or phantom data.
Dirty Read. Implements dirty read (isolation level 0 locking). When this option is set, it is possible to read uncommitted or dirty data, change values in the data, and have rows appear or disappear in the data set before the end of the transaction. This is the least restrictive of the isolation levels.
Repeatable Read. Places locks on all data that is used in a query, preventing other users from updating the data. However, new phantom rows can be inserted into the data set by another user and are included in later reads in the current transaction.
Serializable. Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only if necessary.
Maximum connections
The maximum number of connections allowed for this connection pool. The default is 10. This value should be determined by the database make and model and the configuration of the hardware box on which the database runs as well as the number of concurrent users who require access.
NOTE:  For deployments with Intelligence Dashboard pages, consider estimating this value at 10% to 20% of the number of simultaneous users multiplied by the number of requests on a dashboard. This number may be adjusted based on usage. The total number of all connections in the repository should be less than 800. To estimate the maximum connections needed for a connection pool dedicated to an initialization block, you might use the number of users concurrently logged on during initialization block execution.
Name
The name for the connection pool. If you do not type a name, the Administration Tool generates a name. For multidimensional and XML data sources, this is prefilled.
Parameters supported
If the database features table supports parameters and the connection pool check box property for parameter support is unchecked, special code executes that allows the Oracle BI Server to push filters (or calculations) with parameters to the database. The Oracle BI Server does this by simulating parameter support within the gateway/adapter layer by sending extra SQLPrepare calls to the database.
Permissions
Assigns permissions for individual users or groups to access the connection pool. You can also set up a privileged group of users to have its own connection pool.
Require fully qualified table names
Select this check box, if the database requires it.
When this option is selected, all requests sent from the connection pool use fully qualified names to query the underlying database. The fully qualified names are based on the physical object names in the repository. If you are querying the same tables from which the physical layer metadata was imported, you can safely check the option. If you have migrated your repository from one physical database to another physical database that has different database and schema names, the fully qualified names would be invalid in the newly migrated database. In this case, if you do not select this option, the queries will succeed against the new database objects.
For some data sources, fully qualified names might be safer because they guarantee that the queries are directed to the desired tables in the desired database. For example, if the RDBMS supports a master database concept, a query against a table named foo first looks for that table in the master database, and then looks for it in the specified database. If the table named foo exists in the master database, that table is queried, not the table named foo in the specified database.
Shared logon
Select the Shared logon check box if you want all users whose queries use the connection pool to access the underlying database using the same user name and password.
If this option is selected, then all connections to the database that use the connection pool will use the user name and password specified in the connection pool, even if the user has specified a database user name and password in the DSN (or in user configuration).
If this option is not selected, connections through the connection pool use the database user ID and password specified in the DSN or in the user profile.
Timeout (Minutes)
Specifies the amount of time, in minutes, that a connection to the data source will remain open after a request completes. During this time, new requests use this connection rather than open a new one (up to the number specified for the maximum connections). The time is reset after each completed connection request.
If you set the timeout to 0, connection pooling is disabled; that is, each connection to the data source terminates immediately when the request completes. Any new connections either use another connection pool or open a new connection.
Use Multithreaded Connections
When the check box is selected, the Oracle BI Server terminates idle physical queries (threads). When it is not selected, one thread is tied to one database connection (number of threads = maximum connections). Even if threads are idle, they consume memory.
The parameter DB_GATEWAY_THREAD_RANGE in the Server section of NQSConfig.ini establishes when Oracle BI Server terminates idle threads. The lower number in the range is the number of threads that are kept open before Oracle BI Server takes action. If the number of open threads exceeds the low point in the range, Oracle BI Server terminates idle threads. For example, if DB_GATEWAY_THREAD_RANGE is set to 40-200 and 75 threads are open, Oracle BI Server terminates any idle threads.
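In NQSConfig.ini the parameter sits in the Server section and looks roughly like this (the 40-200 range is the example used above; tune it for your deployment):

```ini
[ SERVER ]
DB_GATEWAY_THREAD_RANGE = 40-200 ;
```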

Setting Up Additional Connection Pool Properties for an XML Data Source


Use the XML tab of the Connection Pool dialog box to set additional properties for an XML data source.
CAUTION:  The XML tab of the Connection Pool dialog box provides the same functionality as the XML tab of the Physical Table dialog box. However, when you set the properties in the XML tab of the Physical Table dialog box, you override the corresponding settings in the Connection Pool dialog box.
To set up connection pool properties for an XML data source
  1. Right-click the XML database and select New Object > Connection Pool.
  2. In the Connection Pool dialog box, click the XML tab.
  3. Complete the fields, using Table 10 as a guide.
Table 10. XML Connection Pool Properties
Search script (Connection method)
Used for an XML Server data source.

Maximum connections (Connection properties)
Default is 10.

URL loading time out (Connection properties)
Used for an XML data source. The time-out interval for queries; the default is 15 minutes. If you specified a URL to access the data source, set the URL loading time-out as follows:
  • Select a value from the drop-down list (Infinite, Days, Hours, Minutes, or Seconds).
  • Specify a whole number as the numeric portion of the interval.

URL refresh interval (Connection properties)
Used for an XML data source. The refresh interval is analogous to setting cache persistence for database tables: it is the interval after which the XML data source is queried again directly rather than answered from the cache. The default is Infinite, meaning the XML data source is never refreshed. If you specified a URL to access the data source, set the URL refresh interval as follows:
  • Select a value from the drop-down list (Infinite, Days, Hours, Minutes, or Seconds).
  • Specify a whole number as the numeric portion of the interval.

Header file/Trailer file (Query input supplements)
Used for an XML Server data source.

Query output format
Choose only XML for an XML data source. Other choices are available for an XML Server data source.

XPath expression (Query output format)
An XPath expression is a simple XSLT transformation rule that fits on one line. It is not essential, given the support for XSLT, but is provided for convenience. A sample entry might be */BOOK[not(PRICE>'200')].
  • For an XML Server data source, you cannot specify an XPath expression on the XML tab of the Physical Table object.

XSLT file (Query output format)
An XSLT file contains formatting rules written according to the XSLT standard and defines how an XML file can be transformed. The current implementation of the XML gateway does not support all XML files, only those that resemble a tabular form, with repeating second-level elements. To make the XML gateway more versatile, you can specify an XSLT file that preprocesses an XML file into a form that the Oracle BI Server supports. Specifying the XSLT file in the connection pool applies it to all the XML physical tables in the connection pool.
  • For an XML data source, you can specify an XSLT file on a per-table basis on the XML tab of the Physical Table object; this overrides the XSLT file specified in the connection pool.
  • For an XML Server data source, you cannot specify an XSLT file on the XML tab of the Physical Table object.
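As a sketch of such a preprocessing step, an XSLT file of this kind flattens a nested document into the tabular, repeating-second-level-element shape the gateway expects. The element names below are hypothetical, chosen only for illustration:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Hypothetical input: /CATALOG/SHELF/BOOK; output: flat /ROWSET/ROW rows -->
  <xsl:template match="/">
    <ROWSET>
      <xsl:for-each select="CATALOG/SHELF/BOOK">
        <ROW>
          <TITLE><xsl:value-of select="TITLE"/></TITLE>
          <PRICE><xsl:value-of select="PRICE"/></PRICE>
        </ROW>
      </xsl:for-each>
    </ROWSET>
  </xsl:template>
</xsl:stylesheet>
```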
To specify query output format settings
  1. (Optional) For an XSLT file, type the path and name of the XSLT file in the XSLT File field, or use the Browse button.
  2. (Optional) For an XPath expression, type the XPath expression in the XPath Expression field, for example, //XML, or use the Browse button.
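The sample expression */BOOK[not(PRICE>'200')] keeps BOOK elements whose PRICE does not exceed 200. Its effect can be reproduced in plain Python as a sketch; the document and element names are illustrative, and this only emulates the predicate rather than using the gateway's XPath engine:

```python
import xml.etree.ElementTree as ET

doc = """
<CATALOG>
  <BOOK><TITLE>Budget Guide</TITLE><PRICE>150</PRICE></BOOK>
  <BOOK><TITLE>Rare Atlas</TITLE><PRICE>250</PRICE></BOOK>
</CATALOG>
"""

root = ET.fromstring(doc)
# Emulate */BOOK[not(PRICE>'200')]: keep BOOK children whose PRICE is not > 200.
# (ElementTree's limited XPath has no not(), so the predicate is applied in Python.)
books = [b for b in root.findall("BOOK") if float(b.findtext("PRICE")) <= 200]
titles = [b.findtext("TITLE") for b in books]
```

Here only the first BOOK survives the filter, mirroring what the gateway would return for the sample expression.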

Setting Up Write-Back Properties




Use this section to complete the Write Back tab in the Connection Pool dialog box.
To set up write-back properties for connection pools
  1. In the Physical layer of the Administration Tool, right-click a database and select New Object > Connection Pool, or double-click an existing connection pool.
  2. In the Connection Pool dialog box, click the Write Back tab.
  3. In the Write Back tab, complete the fields using Table 11 as a guide.
Table 11. Field Descriptions for Write Back Tab
Buffer Size (KB) (Bulk Insert)
Limits the number of bytes written each time data is inserted into a database table.

Transaction Boundary (Bulk Insert)
Controls the batch size for an insert into a database table.

Database Name (Temporary table)
The database in which the temporary table is created. This property applies only to IBM OS/390, because OS/390 requires the database name qualifier to be part of the CREATE TABLE statement. If left blank, OS/390 defaults the target database to a system database, for which users may not have CREATE TABLE privileges.

Owner (Temporary table)
The table owner name used to qualify a temporary table name in a SQL statement, for example, to create the table owner.tablename. If left blank, the user name specified in the writeable connection pool is used to qualify the table name, and the Shared Logon field on the General tab should also be set.

Prefix (Temporary table)
The first two characters in a temporary table name created by the Oracle BI Server. The default value is TT.

Tablespace Name (Temporary table)
The tablespace in which the temporary table is created. This property applies only to OS/390, because OS/390 requires the tablespace name qualifier to be part of the CREATE TABLE statement. If left blank, OS/390 defaults the target database to a system database, for which users may not have CREATE TABLE privileges.
Unicode Database Type
Select this check box when working with columns of an explicit Unicode data type, such as NCHAR, in a Unicode database. This ensures that the binding is correct and that data is inserted correctly. Different database vendors provide different character data types and different levels of Unicode support. Use the following general guidelines to determine when to select this check box:
  • On a database where the CHAR data type supports Unicode and there is no separate NCHAR data type, do not select this check box.
  • On a database where the NCHAR data type is available, it is recommended to select this check box.
  • On a database where the CHAR and NCHAR data types are configured to support Unicode, selecting this check box is optional.
NOTE:  Unicode and non-Unicode data types cannot coexist in a single non-Unicode database. For example, mixing the CHAR and NCHAR data types in a single non-Unicode database environment is not supported.
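Among the Bulk Insert settings above, Transaction Boundary controls how many rows are sent per batch. That batching behavior can be sketched as follows; the function names are illustrative, not BI Server internals:

```python
def batched_insert(rows, execute_batch, transaction_boundary=1000):
    """Send rows to the database in batches of at most transaction_boundary.

    execute_batch is a stand-in for the target database's bulk-insert call;
    each call corresponds to one committed transaction.
    """
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= transaction_boundary:
            execute_batch(batch)  # one full batch = one transaction
            batch = []
    if batch:                     # flush the final partial batch
        execute_batch(batch)
```

A larger boundary means fewer, bigger transactions; a smaller one bounds the work lost if a batch fails, which is the trade-off the setting exposes.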