This is something I see come up on the OTN forums every now and then, so I thought I would put together a post explaining how OBIEE query caching works and how to administer it.
Introduction
OBIEE includes two levels of caching: one works at the OBIEE server level and caches the actual data returned from the database for a query in a file on disk; the other is at the OBIEE Presentation Server. This post will concentrate on the former.
NQSConfig
Caching is enabled and configured in the NQSConfig.ini file located at \server\Config; look for this section:
###############################################################################
#
# Query Result Cache Section
#
###############################################################################
[ CACHE ]
ENABLE = YES;
// A comma separated list of pair(s)
// e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;
The first setting here, ENABLE, controls whether caching is enabled on the OBIEE server. If this is set to NO then the OBIEE server will not cache any queries; if it is set to YES then query caching will take place.
DATA_STORAGE_PATHS is a comma-delimited list of paths and size limits used for storing the cache files. This can include multiple storage locations, e.g. "C:\OracleBIData\cache" 500 MB, "C:\OracleBIData\cache2" 200 MB. Note that although the docs say the maximum value is 4GB, the actual maximum value is 4294967295.
MAX_ROWS_PER_CACHE_ENTRY is the limit on the number of rows in a single query cache entry; this is designed to stop huge queries consuming all the cache space.
MAX_CACHE_ENTRY_SIZE is the maximum physical size of a single entry in the query cache. If there are queries whose results you want cached but which are not being stored, consider increasing this value to capture those results; Oracle suggests not setting it to more than 10% of the maximum storage allocated to the query cache.
MAX_CACHE_ENTRIES is the maximum number of entries that can be stored in the query cache; when this limit is reached, entries are replaced using a Least Recently Used (LRU) algorithm.
POPULATE_AGGREGATE_ROLLUP_HITS controls whether to populate the cache with a new entry when the server has aggregated data from an existing cache entry to fulfil a request. By default this is NO.
USE_ADVANCED_HIT_DETECTION: if you set this to YES, an advanced two-pass algorithm is used to search for cache hits rather than the normal one-pass algorithm. This means you may get more hits from the cache, but the two-pass algorithm is computationally more expensive.
Managing cache
The cache can be managed by opening the Administration Tool and connecting online to the OBIEE server. Go to Manage -> Cache:
This will open a window showing the current state of the cache with all its entries:
This contains some very interesting information about each entry, such as when it was created, who created it, what logical SQL it contains, how many rows are in the entry, how many times the entry has been used since it was created, etc.
To purge cache entries, select them in the right-hand window, right-click and choose Purge from the context menu:
This will remove those cache entries from the query cache, and any request which would previously have been fulfilled by them will instead go to the database (and create a new cache entry). This feature is essential if you make a data fix to the underlying data warehouse and need to ensure the fix is reflected in OBIEE, as the existing cache entries will then be stale (outdated because the underlying data no longer matches the data in the cache).
Cache Persistence Timing
In addition to the global cache settings you can set the cache persistence individually on each physical table:
The first option is to make the table cacheable or not. If the table is cacheable then there are further options: "Cache never expires", which is self-explanatory, and "Cache persistence time", which accepts a number and a period (days, hours, minutes, seconds). This defines how long an entry for that table will be kept in the query cache; after the period expires the cache entry for this table will be removed.
Programmatically purging the cache
Oracle provides a number of ODBC calls which you can issue to the OBIEE server over the Oracle-provided ODBC driver to purge the cache. These are:
SAPurgeCacheByQuery(x) - This will purge any cache entry generated by the logical SQL statement passed as x.
SAPurgeCacheByTable(x,y,z,m) - This will purge all cache entries associated with the specified physical table, where:
x = Database name
y = Catalog name
z = Schema name
m = Table name
SAPurgeAllCache() - This will purge all entries from the query cache.
These ODBC calls can be issued by creating a .sql file containing the call, e.g.:
Call SAPurgeAllCache();
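The table and query variants follow the same pattern; as a rough sketch, a .sql file using them might contain the lines below, where the database, catalog, schema and table names and the logical SQL are placeholders to be replaced with values from your own repository:
Call SAPurgeCacheByTable( 'DBName', 'CatalogName', 'SchemaName', 'TableName' );
Call SAPurgeCacheByQuery( 'SELECT Region, Revenue FROM SalesSubjectArea' );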
Then create a script to call it; on Windows you can create a .cmd file with the following contents:
nqcmd -d "AnalyticsWeb" -u Administrator -p [admin password] -s [path to sql file].sql -o [output log file name]
This will use nqcmd to open a connection to the AnalyticsWeb ODBC data source and then run your .sql file, which will purge the cache.
I have used this many times as the only step in a custom Informatica workflow, which is then added as a task in the DAC and set as a following task for an Execution Plan. This way, after the ETL completes, the cache is purged automatically and there is no risk of stale cache entries.
Event Polling Tables
Another method for purging the cache selectively is to use event polling tables. These are tables which the OBIEE server polls regularly; when it finds a row in the table it processes it and purges any cache entries for the table referenced in that row. Your ETL process can be configured to add rows to an event polling table when it has updated a table, ensuring the query cache does not go stale. An example event polling table:
create table UET (
UpdateType Integer not null,
UpdateTime date DEFAULT SYSDATE not null,
DBName char(40) null,
CatalogName varchar(40) null,
SchemaName varchar(40) null,
TableName varchar(40) not null,
Other varchar(80) DEFAULT NULL
);
Once this table has been created you need to import it into the physical layer and then use the Tools -> Utilities -> Oracle BI Event Tables option to select the table as an event table.
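To sketch how the ETL side might use this (the database, schema and table names below are hypothetical examples, and UET is the table created above), the final step of a load that has refreshed a fact table could insert a row like this:
insert into UET (UpdateType, UpdateTime, DBName, CatalogName, SchemaName, TableName)
values (1, SYSDATE, 'My_DB', null, 'My_Schema', 'W_SALES_F');
commit;
The DBName, SchemaName and TableName values need to match the names of the physical objects in the repository so the OBIEE server can work out which cache entries to purge; once it has processed a row it deletes it from the event table.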
Online Modification of RPD
If you modify the RPD online, then any changes that affect a Business Model will force a purge of the cache for any entries relating to that Business Model.
Well, I hope that is useful. It is by no means a complete document on query caching, but it should give a good overview and some ideas on managing the cache in a live environment.