Thursday, November 26, 2009

ISAPI Forwarding

At a recent client we were using two web servers, both running IIS and the OBIEE presentation services; on top of these, a load balancer routed connections to one of the two servers based on load.  We used the Oracle-provided replication services to keep the contents of the two web catalogs running on each presentation server in sync.

However, we noticed over time that the replication services weren't working reliably and that some items were not present in both web catalogs.

To get around this we decided instead to use the ISAPI forwarding functionality; this allows the IIS server on one web server to forward its presentation service connections to another server.  This means that we can have two servers running IIS and accepting incoming connections, but only one running the presentation services, so only one web catalog is required and no replication has to take place.

In versions before the name change to OBIEE (e.g. Siebel Analytics 7.8) this was handled via a registry key entry as below:

Path:

HKEY_LOCAL_MACHINE\Software\Siebel Systems, Inc.\Siebel Analytics\Web\7.8\ISAPI\

String value:

Name = ServerConnectString
Value = sawtcp://[server name]:9710

In OBIEE this has been replaced with a setting in the configuration file:

[Oracle BI Directory]\web\config\isapiconfig.xml

Like this:

<?xml version="1.0" encoding="utf-8"?>
<WebConfig>
   <ServerInstance>
       <ServerConnectInfo address="localhost" port="9710"/>
   </ServerInstance>
</WebConfig>

Just change localhost to the name of the server running the presentation services that you want the IIS connections to be forwarded to.

Wednesday, October 28, 2009

RPD Groups and Siebel Responsibilities

This post explains how responsibilities map to RPD groups in OBIA/Analytics Apps when Siebel is the source system.

First of all, there must be groups in the RPD and responsibilities in Siebel with exactly the same names.  Whether you create responsibilities in Siebel to match existing RPD groups or vice versa doesn't matter; the important point is that the names match exactly:

Groups in the RPD:

Responsibilities in Siebel:

When a user logs into OBIEE an initialisation block named "Authorization" runs some SQL against the Siebel database and gets the responsibilities for the current user.  It sets the session variable GROUP to the set of responsibilities returned.
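
For reference, the SQL behind that initialisation block is along these lines; this is only a sketch based on the standard Siebel schema tables (S_USER, S_PER_RESP and S_RESP), so check the actual block in your RPD for the exact statement.  Because the block uses row-wise initialisation, the SQL returns name/value pairs, hence the literal 'GROUP' in the first column:

SELECT 'GROUP', R.NAME
FROM S_RESP R, S_PER_RESP PR, S_USER U
WHERE PR.RESP_ID = R.ROW_ID
AND PR.PER_ID = U.PAR_ROW_ID
AND U.LOGIN = ':USER'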

If there are groups in the RPD with identical names to any of the responsibilities held in the GROUP variable after the initialisation block has run, then the user will be added to those groups and will inherit all the security (object and data level) defined for them.

This example is based on Siebel as a source, but there is no reason why this can't work for any source.  The important point is the session initialisation block which runs and gets the groups to add the user to.  In Siebel this is controlled through responsibilities; in another source system it could be another mechanism, and as long as the group membership can be retrieved through SQL the initialisation block can be changed to use it.

Tuesday, October 20, 2009

Logical Levels

One of the issues that developers regularly run into with OBIEE (especially when they are new to the application) is the setting of levels against logical fact sources.  The most common error, which should instantly have you thinking about levels, is:

Logical dimension table has a source that does not join to any fact source

Beyond the obvious check that all sources do actually join to a fact, you should then think about levels.  Logical levels are set against the logical sources of fact tables in the BMM layer:



Levels control how the fact table connects to the dimension table, and they are used to create level-based measures and to control the use of aggregate tables.  What many people don't realise is that to make OBIEE work correctly you should always set the levels on fact tables, so that OBIEE knows it can connect the dimension and the fact at the most detailed level.

The general rules for levels are:
  1. If you create a new dimension table, build a dimension hierarchy over it
  2. Make sure the dimension hierarchy has a detail level at the lowest level of granularity for the dimension table (i.e. integration id, ROW_WID etc)
  3. If you are adding a join from this new dimension to any fact table, make sure that you set the level for the connected dimension hierarchy against the correct logical source of the fact (the one that physically joins to the dimension).  The default level to set is the lowest level (the detail level).
  4. If you create a new fact then you need to make sure you set the levels as in 3 for all the dimension hierarchies built over dimensions which this fact joins to.
Sometimes you can get away with not setting levels, but more often than not this will come back and bite you in the long run; the better option is to ensure that you always set them.

Monday, September 28, 2009

Row-wise variables

This is a topic I've seen a few times on the OTN forums, and I'm not surprised because the way it has been built is not exactly intuitive.  With that in mind, here are some notes on row-wise initialisation of variables in OBIEE: how it works, how to use it and what it is useful for.

So first of all I am going to set up a table for testing purposes:
CREATE TABLE TEST_ROWWISE(
    ROW_WID NUMBER(10) NOT NULL,
    USER_NAME VARCHAR2(100),
    VAR_VALUE VARCHAR2(100)
);

INSERT INTO TEST_ROWWISE VALUES(1,'matt','Value 1');
INSERT INTO TEST_ROWWISE VALUES(2,'matt','Value 2');
INSERT INTO TEST_ROWWISE VALUES(3,'matt','Value 3');
INSERT INTO TEST_ROWWISE VALUES(4,'sue','Value 1');
INSERT INTO TEST_ROWWISE VALUES(5,'sue','Value 2');

commit;

Then in my repository I created a user matt and a user sue for testing purposes:

Next I created a new session initialisation block:

With a data source of this:

Then in the data target:

The end result of this is that when a user logs in, the Oracle BI server runs the SQL in the initialisation block against the database and returns the rows from the table TEST_ROWWISE for that user (the parameter :USER always passes the username of the currently logged-in user).  After it has done that, the logged-in user has a session variable called 'VAR' containing all the values for that user.
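
The data source SQL for this initialisation block would look something like the following (a sketch of the statement in the screenshots above; row-wise initialisation expects name/value pairs, so the variable name is returned as a literal in the first column):

SELECT 'VAR', VAR_VALUE
FROM TEST_ROWWISE
WHERE USER_NAME = ':USER'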

I tested it by exposing the TEST_ROWWISE table in the RPD and creating a simple request against that table:

Then to test the variable add a filter to the request:

And the detail of the filter:

When logged on as matt I see:


And when logged in as sue:

So here you can see that the logged in user has a session variable VAR which holds all the values returned from the row-wise initialisation.
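
In logical SQL the filter boils down to comparing the column with the session variable, something like this (a sketch; when the variable holds multiple values the BI server expands it into the full list for the user):

TEST_ROWWISE.VAR_VALUE = VALUEOF(NQ_SESSION."VAR")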

One way this technique is used in the Oracle Business Intelligence Applications is to fetch from the source database the web groups that the user should be added to.  For instance, if Siebel is the source then an initialisation block gets all the responsibilities for the user from the Siebel DB and assigns these to a variable called GROUP.  In this way any responsibilities that the user is associated with, which have exactly the same names as groups existing in the RPD, will be automatically associated with the user at login.

JavaScript help item

One thing that I have found useful at a few different clients is a small item that can be added to a dashboard which helps the user to understand what the dashboard is supposed to achieve, how to use it and so on.  This can be accomplished fairly easily with a static text view on a report that just brings back a very quick result.  However, sometimes this will be even better received if you add a bit of style to it.  It is very easy to make an expanding/collapsing item using some simple JavaScript and CSS.  Here is a very simple example:

Initial state of item:

After clicking the "Show Help" link:

Clicking the Hide Help link will then collapse the item again.  This adds a bit of style to the help item and saves screen real estate on the dashboard when the help is not needed.

This can be easily created in a static text item following these steps:
  1. Create a very simple report, for example just put the year column from one of the date dimensions in the request and limit it to 2009, something that will return very quickly.  You have to have a query underneath a request, even if you are only exposing a static text view.
  2. Create a static text view, ensure the "Contains HTML Markup" box is ticked otherwise it won't work properly.
  3. Put the HTML in the box, here is the HTML for my simple example:
<html>
    <head>
       <meta http-equiv="content-type" content="text/html;charset=ISO-8859-1">
       <style type="text/css">
          <!--
          div.wrapper {
             text-align:left;
             margin:0 auto;
             width:500px;
             border:2px solid #1358A8;
             background: #9ABFDC;
             padding:10px;
          }
          #myvar{
             border:1px solid #1358A8;
             background:#EAEFF5;
             padding:20px;
             font-family:verdana;
          }
          a.link{
             color:#1358A8;
             cursor:pointer;
             font-family:verdana;
             font-weight:bold;
             text-decoration: underline;
          }
          p.custom{
             font-family: verdana;
             color: #586073;
          }
          -->
       </style>
       <script type="text/javascript">
          <!--
          function switchMenu(obj) {
             var el = document.getElementById(obj);
             var linkVar = document.getElementById("showhidelink");
             if ( el.style.display != "none" ) {
                el.style.display = 'none';
                 linkVar.innerHTML = "Show Help";
             }
             else {
                el.style.display = 'block';
                linkVar.innerHTML = "Hide Help";
             }
          }
          //-->
       </script>
    </head>
    <body>
       <div class="wrapper">
          <p align="center"><a title="Show/Hide" onclick="switchMenu('myvar');" class="link" id="showhidelink">Show Help</a></p>
          <div id="myvar" style='display:none;'>
             <p class = "custom">Here is some help text to give details on what this dashboard is showing</p>
             <p class = "custom">And another paragraph with some more information</p>
          </div>
       </div>
    </body>
</html>

As you can imagine, using some more complex CSS you can make this look even better and match any company branding guidelines for your client.

Thursday, September 24, 2009

DAC server and OC4J as Windows services

As I'm sure you are all aware, the DAC server runs as a batch command which lives in the user's session, so as soon as that session ends the command dies and the DAC server stops.  This is a real pain for a server-side app, as at many clients users' remote desktop sessions are automatically disconnected after a period of time.  One way to get around this is to create scheduled server tasks to kick off the DAC server process just before the ETL starts; this is far from ideal, however.

So, a while ago I came across a posting on MySupport (or Metalink as it was then) (ID 578174.1) which suggested using a free utility called JavaService to install OC4J as a service.  The utility is available here:


This got me thinking about whether I could get the DAC server to run as a service.  After a lot of playing around I finally got it working properly; you need to create the service using JavaService with this command:

javaservice -install "Oracle BI: DAC Service" "C:\Progra~1\Java\jdk1.5.0_17\jre\bin\client\jvm.dll" -Xms256m -Xmx1024m "-Djava.class.path=D:\OracleBI\DAC\lib\ojdbc14.jar;D:\OracleBI\DAC\DAWSystem.jar;E:\OracleBI\;D:\OracleBI\DAC\lib\ant-1.6.5.jar;D:\OracleBI\DAC\lib\antlr-2.7.6.jar;D:\OracleBI\DAC\lib\ant-antlr-1.6.5.jar;D:\OracleBI\DAC\lib\asm.jar;D:\OracleBI\DAC\lib\asm-attrs.jar;c3p0-0.9.0.jar;D:\OracleBI\DAC\lib\cglib-2.1.3.jar;D:\OracleBI\DAC\lib\checkstyle-all.jar;D:\OracleBI\DAC\lib\cleanimports;D:\OracleBI\DAC\lib\concurrent-1.3.2.jar;D:\OracleBI\DAC\lib\connector.jar;D:\OracleBI\DAC\lib\dom4j-1.6.1.jar;D:\OracleBI\DAC\lib\ehcache-1.2.3.jar;D:\OracleBI\DAC\lib\hibernate3.jar;D:\OracleBI\DAC\lib\jaas.jar;D:\OracleBI\DAC\lib\jacc-1_0-fr.jar;D:\OracleBI\DAC\lib\javassist.jar;D:\OracleBI\DAC\lib\jaxen-1.1-beta-7.jar;D:\OracleBI\DAC\lib\jboss-cache.jar;D:\OracleBI\DAC\lib\jboss-common.jar;D:\OracleBI\DAC\lib\jboss-jmx.jar;D:\OracleBI\DAC\lib\jboss-system.jar;D:\OracleBI\DAC\lib\jdbc2_0-stdext.jar;D:\OracleBI\DAC\lib\jgroups-2.2.8.jar;D:\OracleBI\DAC\lib\jta.jar;D:\OracleBI\DAC\lib\junit-3.8.1.jar;D:\OracleBI\DAC\lib\log4j-1.2.11.jar;D:\OracleBI\DAC\lib\proxool-0.8.3.jar;;D:\OracleBI\DAC\lib\versioncheck.jar;D:\OracleBI\DAC\lib\xerces-2.6.2.jar;D:\OracleBI\DAC\lib\xml-apis.jar;D:\OracleBI\DAC\lib\jsr173_api.jar;D:\OracleBI\DAC\lib\sjsxp.jar" "-Duser.dir=D:\OracleBI\DAC" -start  com.siebel.etl.net.QServer -description "Oracle BI DAC Server Service"

Long, I know :-) but the DAC server needs all those jars to work properly.  Obviously you need to change the paths if your OBIEE instance is not installed in D:\OracleBI.

This should create a service which you can start and stop like any other Windows service, and which runs a perfectly functioning DAC server.

To get back to the OC4J document I found on MySupport (ID 578174.1): I tried it and initially it appeared to work, but I soon realised there were problems with the BI Publisher server (which runs in OC4J) which meant it could not access the report repository properly.

So I got to work trying to find out why this didn't work and eventually worked it out.  Here are the steps to get an OC4J Windows service which allows BI Publisher to work fully:
  1. Copy the JavaService.exe executable to the directory OracleBI\oc4j_bi\bin.  You can rename it to something else if you want; I named it OC4JService.exe.  It is vitally important that you copy the executable to this location and use this executable to create the service, otherwise you will have problems with the BI Publisher server not being able to access the BI Publisher repository.
  2. Using this executable, create the Windows service with this command:
OC4JService -install "Oracle BI: OC4J Service" "D:\jre1.5.0_17\bin\client\jvm.dll" -XX:MaxPermSize=128m "-Djava.class.path=D:\OracleBI\oc4j_bi\j2ee\home\oc4j.jar" -start oracle.oc4j.loader.boot.BootStrap -params -out "D:\OracleBI\oc4j_bi\j2ee\home\log\oc4j.log.txt" -err "D:\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err.txt" -current "D:\OracleBI\oc4j_bi\bin" -description "Oracle BI Publisher Server"
Obviously the first parameter, the service name, is a personal choice.  The second parameter is the location of the JRE to use.  Leave the third parameter as is.  The fourth parameter is the location of oc4j.jar, the main jar for OC4J.  The fifth parameter is the class to start from that jar; leave this as is.  The parameters after -params are optional: an output log, an error log and a description for the service.

Once the command in step 2 has finished, the Windows service should be created and it should start, stop and operate normally.  BI Publisher running from this service should work correctly and have full access to the report repository.

OBIEE Security

OBIEE security boils down to two different types:
  • Object level security
  • Data level security
Object Level Security
Each object in the RPD can be secured by user group to restrict access. 

In this picture everyone has access to this presentation table.  To restrict access you click on the tick to change it to a cross for Everyone, and then grant access only to specific groups by changing the box to a tick for those groups.

Data Level Security
Data level security is also controlled by user group: filters are defined against user groups, and these filters restrict the data returned to each user by manipulating the WHERE clause in the SQL generated by the server.

For instance:

Here you can see the filters that are defined for the user group "Primary Org-Based Security".  If we take one example:

When a user creates a query including the logical table Core."Dim - Opportunity", the OBIEE server will look up the physical column for the logical column Core."Dim - Opportunity".VIS_PR_BU_ID and add a clause for this column to the WHERE clause of the generated SQL query.  In the case above we are using a session variable called ORGANIZATION, which is populated at login by row-wise initialisation.
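
The filter itself is defined on the group as an expression along these lines (a sketch; the BI server expands the multi-valued ORGANIZATION variable into the full list of the user's organizations):

Core."Dim - Opportunity"."VIS_PR_BU_ID" = VALUEOF(NQ_SESSION."ORGANIZATION")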

So we end up with this on the end of the SQL Query:

WHERE W_OPTY_D.VIS_PR_BU_ID IN ('1-A1233','1-D453G','1-98GT2')

And so the user will only see data from their own organizations.  This exact example is based on Analytics Apps 7.9.5 using Siebel as the source OLTP system, so the ORGANIZATION variable is initialised by a SQL statement which gets all the user's organizations from the Siebel DB; you may need to create something similar for your own application.

Wednesday, September 23, 2009

OBIEE Query Caching

This is something I see come up on the OTN forums every now and then so I thought I would put together a post explaining how OBIEE query caching works and how to administer it.

Introduction

OBIEE includes two levels of caching: one works at the OBIEE server level and caches in a file the actual data returned from the database for a query, while the other works at the OBIEE Presentation Server level.  This post will concentrate on the former.

NQSConfig

Caching is enabled and configured in the NQSConfig.ini file, located in the server\Config directory of the OBIEE installation; look for this section:

###############################################################################
#
#  Query Result Cache Section
#
###############################################################################

[ CACHE ]

ENABLE    =    YES;
// A comma separated list of pair(s)
//   e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
DATA_STORAGE_PATHS    =    "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;  // 0 is unlimited size
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000; 

POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;


The first setting here, ENABLE, controls whether caching is enabled on the OBIEE server.  If this is set to NO then the OBIEE server will not cache any queries; if it is set to YES then query caching will take place.

DATA_STORAGE_PATHS is a comma-delimited list of paths and size limits used for storing the cache files.  This can include multiple storage locations, e.g. "C:\OracleBIData\cache" 500 MB, "C:\OracleBIData\cache2" 200 MB.  Note that although the docs say the maximum value is 4 GB, the actual maximum value is 4294967295.

MAX_ROWS_PER_CACHE_ENTRY is the limit on the number of rows in a single query cache entry; this is designed to stop huge queries consuming all the cache space.

MAX_CACHE_ENTRY_SIZE is the maximum physical size of a single entry in the query cache.  If there are queries you want cached which are not being stored, consider increasing this value to capture those results; Oracle suggest not setting it to more than 10% of the maximum storage for the query cache.

MAX_CACHE_ENTRIES is the maximum number of entries that can be stored in the query cache; when this limit is reached, entries are replaced using a Least Recently Used (LRU) algorithm.

POPULATE_AGGREGATE_ROLLUP_HITS controls whether to populate the cache with a new entry when a request has been fulfilled by aggregating data from a previous cache entry.  By default this is NO.


USE_ADVANCED_HIT_DETECTION: if you set this to YES then an advanced two-pass algorithm will be used to search for cache hits rather than the normal one-pass algorithm.  This means you may get more hits from the cache, but the two-pass algorithm is computationally more expensive.

Managing cache

The cache can be managed by opening the Administration Tool and connecting online to the OBIEE server.  Go to Manage -> Cache:

This will open a window showing you the cache as it currently is with all entries:

This contains some very interesting information about each entry, such as when it was created, who created it, what logical SQL it contains, how many rows are in the entry, how many times the entry has been used since it was created, and so on.


To purge cache entries, select them in the right-hand window, right-click and choose Purge from the context menu:

This will remove those cache entries from the query cache, and any request which would previously have been fulfilled by them will instead go to the database (and create a new cache entry).  This feature is essential if you make a data fix to the underlying data warehouse and need to ensure the fix is reflected in OBIEE, as the existing cache entries will be stale (outdated because the underlying data no longer matches the data in the cache).

Cache Persistence Timing

In addition to the global cache settings you can set the cache persistence individually on each physical table:

The first option makes the table cacheable or not.  If the table is cacheable then there are two further options: "Cache never expires", which is self-explanatory, and "Cache persistence time", which accepts a number and a period (days, hours, minutes, seconds).  This defines how long an entry for that table will be kept in the query cache; after the time period expires, the cache entries for this table are removed.

Programmatically purging the cache

Oracle provide a number of ODBC calls which you can issue to the OBIEE server over the Oracle-provided ODBC driver to purge the cache.  These are:

SAPurgeCacheByQuery(x) - This will purge from the cache any entry that matches the logical SQL passed to the call.

SAPurgeCacheByTable(x,y,z,m) - This will purge the cache for all entries for the specified physical table where:

x = Database name
y = Catalog name
z = Schema name
m = Table name
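
For example, to purge all entries for the W_OPTY_D table used earlier, the call might look like this (illustrative names only; the database, catalog, schema and table names must match those used in the physical layer of your RPD):

Call SAPurgeCacheByTable('Oracle Data Warehouse', '', 'SIEBEL', 'W_OPTY_D');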

SAPurgeAllCache() - This will purge all entries from the query cache.

These ODBC calls can be issued by creating a .sql file with the call in it, e.g.:

Call SAPurgeAllCache();

Then create a script to call it; on Windows you can create a .cmd file with the contents:

nqcmd -d "AnalyticsWeb" -u Administrator -p [admin password] -s [path to sql file].sql -o [output log file name]

This will use nqcmd to open a connection to the AnalyticsWeb ODBC data source and then run your .sql file, which will purge the cache.

I have used this many times as the only step in a custom Informatica workflow which is then added as a task in the DAC and set as a following task for an Execution Plan.  This way the cache is purged automatically after the ETL and there is no risk of stale cache entries.

Event Polling Tables

Another method for purging the cache selectively is to use event polling tables.  These are tables which the OBIEE server polls regularly; when it finds a row in the table, it processes it and purges any cache entries for the table referenced in that row.  Your ETL process could be configured to add rows to an event polling table when it has updated a table, and thus ensure the query cache is never stale.  An example event polling table:

create table UET (
   UpdateType Integer not null,
   UpdateTime date DEFAULT SYSDATE not null,
   DBName char(40) null,
   CatalogName varchar(40) null,
   SchemaName varchar(40) null,
   TableName varchar(40) not null,
   Other varchar(80) DEFAULT NULL
); 


Once this table has been created you need to import it into the physical layer and then use the Tools -> Utilities -> Oracle BI Event Tables option to select the table as an event table.
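
For example, a post-load step in the ETL could register an update to a fact table with an insert along these lines (a sketch: W_OPTY_F is just an illustrative table name, the DBName, CatalogName and SchemaName values must match the names used in the physical layer, and UpdateType 1 indicates an update):

INSERT INTO UET (UpdateType, DBName, CatalogName, SchemaName, TableName)
VALUES (1, 'Oracle Data Warehouse', NULL, 'SIEBEL', 'W_OPTY_F');
COMMIT;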

Online Modification of RPD 

If you modify the RPD online, then any changes that affect a Business Model will force a purge of the cache for any entries relating to that Business Model.

Well, I hope that is useful.  It is by no means a complete document on query caching, but it should give a good overview and some ideas on managing the cache in a live environment.

DAC import/export

I came across an interesting foible of the DAC today after opening an SR with Oracle.  Apparently, using the import function without selecting Truncate Repository Tables can produce extremely unpredictable results, for instance image suffixes on tables not being imported:

The general advice from Oracle seems to be to always use the Truncate Repository Tables option, although this means you'll always have to export/import all applications/source system containers (SSC), which is a pain as the import process is so incredibly slow.

Also, when exporting you will often find an error in the export.log like this:

W_ETL_SA objects: 31
37  INFO  Fri Sep 18 19:02:38 CEST 2009  Exporting entity W_ETL_SA for application Universal
38  INFO  Fri Sep 18 19:02:38 CEST 2009  Moved W_ETL_SA: 62 objects
39  INFO  Fri Sep 18 19:02:38 CEST 2009  Exporting entity W_ETL_SA for application Siebel 8.0 Vertical
40  WARNING  Fri Sep 18 19:02:38 CEST 2009  Mismatch between the number of records in the object reference table and objects

According to Oracle these errors can be safely ignored; personally I find them a bit disconcerting, as I am worried that some objects are being missed in the import...