Oracle Essbase Care and Maintenance: optimization & backup

In this blog, we recommend methods for optimizing an Essbase database and backing up data. The recommendations are general in nature, and some may be inappropriate for your particular situation. You should discuss your specific needs with your consultant before implementing any optimizations.


Optimizing Essbase

It has been said that tuning an Essbase database depends as much on the chef as it does on the recipe. The optimization process requires more than blindly following a specific set of steps. You need an understanding of the data and the users to be able to select the most appropriate optimizations for your system.


There are five key areas for optimization:

  • Overall performance (choosing an appropriate data storage model)
  • Query performance
  • Data load performance
  • Calculation performance
  • Cache performance


Overall Performance

The most obvious impact on performance can come from the choice of data storage for your Essbase database. Oracle Essbase has two data storage options: aggregate storage option (ASO) and block storage option (BSO). Implementing a storage model that is inappropriate for the data contained in the database can cause performance issues.

When ASO was added to Essbase in 2004, it dramatically changed the performance and scalability landscape. When one beta customer converted an existing BSO application that took more than 3½ hours to aggregate to ASO, the application aggregated in 31 seconds. Early versions of ASO also had a fair number of restrictions. Sadly, myths around these early restrictions persist today, although most of the restrictions were removed, often many years ago. The result is that many long-time Essbase developers still recommend that new Essbase developers start with block storage. We contend that the mindset needs to change from "I will start with BSO, unless it gets too big" to "Is there a reason why I should not use ASO?"

There are still some valid reasons not to choose an ASO solution, such as the need to store/write data to upper levels, or the need to run a procedural calculation script or to call external functions from a script, but this does not preclude a hybrid implementation using the benefits of both storage models. "Should I use ASO or BSO for this model?" is a common question, but the wrong one. "How should I use ASO and BSO together to create my Essbase application?" is a much better starting point. An Essbase application can consist of many databases.

Now that ASO can be the source and target of a partition, creating a composite model that fully satisfies users' requirements is more straightforward, and the resulting solution can leverage the power of both options.

In the context of optimization, we summarize the advantages of BSO and ASO below.

BSO has the following functional advantages:

  • You can input data to upper levels. For example, you can input a total charge at the All Markets and All Products levels, and then use an allocation method to push those values down to the individual product SKUs in the individual cities. Upper-level input is particularly useful when you want to do target budgeting or perform allocations (such as a corporate overhead charge).
  • You can precalculate every intersection. This means that query times are consistent from request to request (assuming similar data request volumes). In practice, many intersections of a BSO database are left to calculate dynamically at retrieval time. Total time period values (such as the total at Quarter 1) are often dynamically calculated as an overall efficiency practice.
  • Period-to-date reporting capabilities are built into BSO databases.
  • You can control calculation behavior down to the cell level. If you need to model a complex calculation process (for example, a goal-seeking calculation), you can control the process in detail with a calculation script.

ASO provides the following advantages:

  • In general, ASO databases are ideal for aggregating large data sets. They can have many millions of members, with large dimensionality, and be sourced with hundreds of gigabytes of data.
  • ASO databases are cell-based, and these smaller structures (compared to blocks) tend to load rapidly. Additionally, because you are not aggregating large portions of the database, but rather strategic points, the data is available to your users with less system downtime. Running an aggregation process (while recommended for performance reasons) is optional. Because all upper-level values are dynamic, the values at upper levels calculate on retrieval immediately after loading data.
  • The overall structure of an ASO database is smaller than that of a BSO database. This, coupled with a smaller aggregation footprint, can lead to a disk footprint significantly smaller than with a BSO database.

When designing a database, ensuring acceptable batch load and calculation times is an exceedingly important consideration. If you cannot provide data access to your users in a timely fashion, the deployment is perceived as a failure. Equally important, however, is the concept of query performance, as discussed in the next sections. You need to optimize the database for the types of queries executed, the number and frequency of those queries, and the size of the user base. The parameters involved vary from ASO to BSO, and this is where some of the benefits of an ASO database become apparent: the number of things you need to consider is dramatically reduced.


Query Performance in Block Storage Databases

Essbase uses a patented storage algorithm for BSO databases that prevents the database explosion that can occur in OLAP databases. Essentially, space is allocated for the members in the dense dimensions whenever data is received against an intersection of sparse members. For example, consider the outline in Figure 1.

Let's assume that Year, Measures, and Scenario are flagged as dense, while Product and Market are sparse. The others are attribute dimensions and are not flagged.

When a Product is sold in a Market, a data block holding all values for Year, Measures, and Scenario is created. Thus, the dense/sparse setting controls the size of the block of data that is stored, as well as how many blocks are created. If Market was changed to dense, the database would store more information within the block, but there would be fewer blocks: one for each Product, rather than one for each Product/Market combination.


Figure 1. Sample outline for a block storage database


This is where the tuning of Essbase becomes more of an art. There is no right answer for the block size (although there may be some wrong ones!). The best block size depends on the nature and use of the application, as well as the hardware and operating system being used.

Various statistics are available (as shown in Figure 2) to help you understand the application and the data, so you can tune these settings. The volume of data is not necessarily the determining factor, but rather the distribution across the dimensions. The distribution across the sparse dimensions determines how many blocks will be created (before aggregation), while the spread across the dense dimensions will determine the block density (basically, how full the block is).

We mentioned that some of the dimensions in the outline were attribute dimensions. An attribute dimension does not affect storage, as it does not hold any data. It does not exist in its own right, but qualifies another dimension: product A is red; product B is green; the manager for the Eastern region is Jeremy. However, attribute dimensions do allow a greater flexibility of analysis. For example, if you request the total sales of red products, Essbase dynamically calculates the result by retrieving the associated members. Of course, whenever values are calculated at request time, there is a potential impact on retrieval speed.


Figure 2. BSO statistics in the Administration Services console


Data Storage Settings Setting dimensions to dense impacts the amount of data stored in each block (that is, the block size). The theoretical block size is calculated by multiplying together the number of members in each dense dimension and then multiplying this by 8 (as each data point requires 8 bytes of storage). Clearly, if you store fewer members, you have a smaller block size, so Essbase allows you to control the storage of individual members. For example, you do not need to store a value for Time, because you can calculate it (and the value of the quarters) from the months. You can flag these members as Dynamic Calc members, so that you are storing 12 members (the months), instead of 17 (the months plus the quarters plus the year). Figure 3 shows a sample optimization to reduce the number of stored members.

Based on the dimensional statistics in Figure 3, you can calculate the impact of the optimization by comparing the Members in Dimension and Members Stored columns. Using the stored members, you can calculate the block size by multiplying the dense dimensions together (12 * 9 * 2 = 216) and then multiplying by the 8 bytes of storage (216 * 8). The result is 1,728 bytes. This compares with the theoretical block size (with no optimizations) of 13,680 bytes (19 * 18 * 5 * 8).

Figure 3. Reducing the number of stored members in dense dimensions reduces block size.
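The block-size arithmetic above can be sketched in a few lines of Python (a hypothetical helper for illustration; it is not part of any Essbase tooling):

```python
# Block size = product of the stored member counts of the dense dimensions,
# multiplied by 8 bytes per data cell.
def block_size_bytes(stored_members_per_dense_dim):
    size = 8  # bytes per cell
    for count in stored_members_per_dense_dim:
        size *= count
    return size

# Theoretical block, no optimizations (19 * 18 * 5 members):
print(block_size_bytes([19, 18, 5]))  # 13680 bytes

# After flagging Dynamic Calc and Label Only members (12 * 9 * 2 stored members):
print(block_size_bytes([12, 9, 2]))   # 1728 bytes
```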

As well as the dynamic calculation setting, there are other options for controlling the size, such as flagging an item as a label. This tells Essbase that it is a placeholder in the outline, but no data will be stored there. Essbase will also automatically try to apply optimizations, so if a member has a single child, it will save space by converting it to an implied share.

Table 1 describes the member storage properties that affect block size and states what happens to the block size when each is used on a dense dimension.

For example, consider the following Scenario dimension with five members (recall that Scenario itself counts as a member):

  • Actual
  • Budget
  • Variance, with the formula @VAR(Actual, Budget)
  • Variance %, with the formula @VARPER(Actual, Budget)


Table 1. Properties That Affect Block Size


If you set Variance and Variance % to Dynamic Calc, the dimension is reduced by two members, which is a reduction of 40 percent. Furthermore, if you set Scenario to Label Only, the dimension is reduced another 20 percent. This reduction has an immediate impact on the size of the block, and in turn, the size of the database, as all blocks are reduced in size.

Choosing Dense or Sparse When determining whether a dimension is dense or sparse, you need to examine the notion of data distribution. Look at each dimension in combination with each of the other dimensions and note where data exists more so than not. As shown in Figure 4, when data exists for many combinations, the dimensions are dense. When few combinations contain values, the dimensions are sparse.


Figure 4. Data distribution


The first distribution diagram in Figure 4 reveals that not all products are sold in all markets. For example, a particular drink might sell better in California than in Texas. However, if you have designed your model correctly, you generally have measures across various time periods. The second distribution diagram reflects that while a particular drink may not sell as well in Texas as in California, some drinks do sell there, and so you have data for sales across time periods.

Block size is the next way to look at dense and sparse. For each application, there should be an optimal size. The Oracle Essbase Database Administrator's Guide suggests 8KB to 64KB; however, block sizes larger than 64KB are often better than ones that are less than 8KB. Simply stated, Essbase reads and writes data by block. Therefore, all input, output, and calculation transactions depend on this configuration. As an example, think of a sandy beach and several sizes of shovel. If the task is to move sand from point A to point B, a shovel the size of a teaspoon will be less efficient than one used for gardening by hand. A larger shovel, like one used for snow, can actually work against you, due to the weight of the sand.

As we have said, dimension size has a direct impact on block size. In the preceding examples, you saw that 1,000 accounts by 12 months means 12,000 cells. However, adding another dense dimension, say one with four members, increases the number of cells fourfold. Therefore, adding even a small dense dimension can have a serious impact. Remember that you do not sum the members of a dimension; you multiply them, so the more dimensions you mark as dense, the bigger the block and the bigger the impact.
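A quick check of the multiplicative growth described above, using the member counts from the text:

```python
from math import prod

# Cells in a block are the product of the dense dimension sizes, not their sum.
print(prod([1_000, 12]))     # 1,000 accounts x 12 months = 12000 cells
print(prod([1_000, 12, 4]))  # adding a 4-member dense dimension = 48000 cells
```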


Figure 5. Scenario tagged as a dense dimension versus as a sparse dimension


If you can split a block along a smaller dimension, you can quickly reduce the size. For example, you might make Scenario sparse, as shown in Figure 5. The reasoning is that frequently you do not need both actual and budget data available at the same time. Assume that you create the budget in the fall, and the actual data does not yet exist. If Scenario were dense, the block would have two times the amount of required data in memory. That means that you need to move more cells into memory than required, which translates to wasted time.

At this point, we will suggest that you review the Oracle Essbase Database Administrator's Guide and be prepared to experiment with different dense and sparse scenarios.


Query Performance in Aggregate Storage Databases

ASO databases load data at level 0 and derive all upper-level members and member formulas dynamically. To optimize retrieval performance, you can run an aggregation process on the database to build stored values at some upper-level intersections. After loading data, Essbase analyzes the source data and builds aggregates to optimize those queries that will take the longest to resolve based on the structure of the model. You can also have Essbase monitor query patterns of your user base, and then build aggregations to serve your specific queries more efficiently.

ASO databases have two primary dimension types to consider for optimization: stored and dynamic. Another dimension type, multiple hierarchies enabled, is a hybrid option that allows a portion of one dimension to be stored and other portions to be dynamic. The dimension types come into play when Essbase builds aggregates. Assume you have a database that has the following dimensions and storage properties:

  • Time (dynamic)
  • Measures (dynamic)
  • Markets (stored)
  • Customer (stored)
  • Products (stored)
  • Scenario (stored)

When the ASO engine builds aggregates, it looks at all members (including upper-level members) in stored dimensions to be part of an aggregate view. Suppose you have 4 million customers divided into five geographical regions and then a few hundred subregions. In this dimension, the ASO engine might build a series of aggregate views at the subregion level. In that case, the queries at the five geographical regions would not need to start adding up the entire group of individual customer accounts. Instead, ASO could start the math from the already derived subregions. That aggregation point (based on this example) might be a customer subregion for a given month for a given product line in North America, and be based on the current year scenario.

Because only stored hierarchies are considered for aggregate views, it is much more efficient to have stored dimensions in the outline. With dynamic dimensions, Essbase must derive the entire dimension from the bottom to provide query results (assuming you are querying at upper levels). The multiple hierarchies enabled dimension type would let you have a portion of a dimension be stored and other portions dynamic. You need to use a dynamic hierarchy in the following circumstances:

  • To use member formulas in a dimension
  • To use consolidation operators other than addition (+)
  • To have complex shared member structures (alternate hierarchy dimensions can generally accommodate simple shared hierarchies)

It is best to use dynamic dimension types for smaller dimensions (hundreds or thousands of members), as these can be derived very quickly. From most efficient to least efficient, the dimension types are stored, multiple hierarchies enabled, then dynamic.


Data Load Optimizations

Another important optimization involves streamlining how data is loaded. The first rule of thumb is to try to keep the data manipulation to a minimum. Yes, you can move and split fields in a load rule or replace strings, but this type of manipulation is better suited to a relational database source. When loading data from an SQL source, it is more efficient to present the data fields to Essbase in the correct order, and perform any manipulations and aggregations within the SQL statement.

Sorting for Block Storage Databases In a BSO application, the most efficient way to load data is to process each block only once. For example, you do not want to write the volume information for each product, and then go back and write the sales value afterwards. Therefore, the optimum sequence for loading data is to sort by the sparse dimensions first, followed by the dense dimensions, so that all the data for a block is processed together, and the block is written only once.

For example, assume that you have a database with Year, Region, Scenario, and Account dimensions. Region and Scenario are flagged as sparse dimensions; Year and Account are dense dimensions. With this configuration, there would be a block of Year by Account data for each combination of Region by Scenario. Figure 6 shows two ways to represent the data for load purposes.

In Figure 6, the file on the left is much less efficient than the one on the right. The right-hand file is sorted by Region and Scenario, so each group of three records belongs to a particular block. Additionally, the example on the right has more data per record than the one on the left. With more data values in each record, Essbase does not need to process as many records.
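The sort order described above can be sketched in Python (the record layout and member names are hypothetical; in practice, the sorting is usually best done in the source SQL):

```python
# Hypothetical load records: (region, scenario, year, account, value).
# Region and Scenario are the sparse dimensions; Year and Account are dense.
records = [
    ("West", "Budget", 2024, "Sales", 80),
    ("East", "Actual", 2024, "Sales", 100),
    ("West", "Actual", 2024, "Sales", 90),
    ("East", "Actual", 2024, "COGS", 40),
    ("East", "Budget", 2024, "Sales", 110),
]

# Sort on the sparse dimensions first, so every record belonging to a given
# Region/Scenario block arrives together and the block is written only once.
records.sort(key=lambda r: (r[0], r[1]))

for region, scenario, *rest in records:
    print(region, scenario, rest)
```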

Buffers for Aggregate Storage Databases In an ASO database, the concept of blocks does not exist, and the organization of the data for optimum loading happens automatically in the load buffer. When large volumes of data are being loaded, the size of this buffer can be adjusted to maintain performance by setting the Pending cache size limit option, as shown in Figure 7. In addition, discrete sets of data can be loaded into individual load buffers, and then the multiple buffers can be committed in a single action.


Calculation Optimizations

One of the principal differences between ASO and BSO is the ability to have procedural calculations. The potential requirement for having a procedural calculation in a deployment is a driving consideration when choosing the data storage model. Of the two models, it is much simpler to optimize ASO aggregations. That is not to say that optimizing calculation scripts is complex, but there is more to consider.


Figure 6. Improve data-load efficiency by processing sparse dimensions first.


Figure 7. Setting the size of the cache



Aggregations for Aggregate Storage Databases Within an ASO database, all calculations are dynamic (performed at retrieval time). Performance is maintained by designing aggregations based on the most expensive queries in the database. Based on the data loaded into the database, the ASO engine uses an algorithm to determine those queries that would take the longest. Using our previous example, if we load data for 4 million customers, a query that requests aggregated values across this dimension would require Essbase to add quite a bit of information. The ASO engine would look at the data distribution under this dimension and try to build aggregation points to help query performance across this dimension.

In general, the more aggregations in a database, the faster the performance. Aggregations, however, require disk space. More aggregate values mean more data storage space. As such, there is a point of diminishing returns. It may not benefit you to use an additional 10GB of disk space if the impact on performance might be negligible.

When optimizing the database, you can specify the amount of disk space you want to allow for aggregations. As shown in Figure 8, Essbase provides an impact analysis comparing query performance improvement and disk space. This lets you make an educated decision on how much space you use.


Figure 8. Essbase impact analysis.


In Figure 8, the horizontal axis on the graph represents the amount of disk space. The vertical axis represents the potential performance gain. In this example, there is a large performance gain over the initial set of aggregations. After that gain, the impact of additional aggregations appears to be somewhat limited.

To quantify the preceding points, a prime way to optimize aggregation behavior in an ASO database is to allocate disk space. In addition to leveraging the default aggregation behavior, the administrator can enable query tracking to log which areas of the database are hit most often, and use this information to overlay or replace the aggregations to fine-tune performance. In this way, the ASO database becomes self-learning.

Another consideration for ASO aggregation optimization is the amount of time it takes Essbase to build the aggregations. You can optimize this process by allocating RAM to the ASO cache. This cache is used by ASO databases to optimize load and aggregation processes.

Calculation Scripts for Block Storage Databases As we discussed earlier, BSO databases rely on blocks. When running a calculation script on a BSO database, you are creating additional blocks, deriving values for existing blocks, or a combination of both. Remember that even if you are calculating only a single value in a block, the entire block comes into memory. The goal of optimizing a BSO calculation script is to calculate as few blocks and as few cells within those blocks as necessary. In short, do not do more work than is necessary to complete your analysis.

By default, Essbase processes only those parts of the database that have been amended. This behavior is called intelligent calculation. Intelligent calculation reduces the need for the developer to think about the process in too much detail. However, there are times when you need to override the default behavior. For example, if a driver value or exchange rate is changed, the whole database needs to be processed, not just those blocks that have been updated.

As with data loading optimization, you want to ensure that Essbase does not read or write any block more than necessary (preferably only once). Therefore, when designing a calculation script, you want to process the database in a logical order and access only those parts of the database that are required. For example, if you load actual numbers for the current month, you do not want to process any of the budget or forecast data that may exist in the database. The IF, FIX, and EXCLUDE commands can be used to control the areas used for either a calculation as a whole or a subsection of it. For example, you could use a statement like FIX(Actual) to ensure that you calculate only the Actual scenario.

Another key part of calculation performance is ensuring that Essbase has sufficient memory available to the script to be able to perform efficiently. As a starting point, you can design the database outline in what is known as the hourglass shape. An hourglass shape organizes the dense dimensions first, in decreasing order of size, followed by the sparse dimensions, in increasing order of size. Then you ensure that the memory allocated is sufficient to allow Essbase to anchor on the last two sparse dimensions.

You need to consider a series of caches and configuration settings when optimizing calculation processes on BSO databases. The best resource for a detailed explanation of these settings and instructions for calculating the requirements is the Oracle Essbase Database Administrator's Guide. That said, the next section provides an overview of the various caches used by Essbase.

One final consideration for calculation script performance is the ability to process calculation tasks in parallel. Essbase can divide a calculation process into multiple tasks, and then run these tasks in parallel. The analysis of whether a script can be run across parallel tasks happens automatically in Essbase; however, you can also set some parameters in the Essbase configuration file (essbase.cfg), located in the bin directory of the Essbase installation.

The Essbase configuration file is an optional file that contains a series of commands Essbase reads at startup. Many of these commands apply to the concept of optimization. For example, you can set default parallel calculation settings for all databases. For descriptions of specific settings, see the Oracle Essbase Technical Reference.
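As an illustration, a parallel-calculation entry in essbase.cfg might look like the following. The values shown are placeholders, not recommendations; check the Oracle Essbase Technical Reference for the valid ranges and defaults:

```
; Enable parallel calculation with up to 4 threads for all databases on this server.
CALCPARALLEL 4

; Use the last 2 sparse dimensions to identify tasks that can run in parallel.
CALCTASKDIMS 2
```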


Optimizing Caches

The caches allow the control of memory allocation to a particular database. Clearly, it is advantageous to allocate as much memory as possible to a database, but this needs to be balanced against the other databases and applications on the server, as well as the total memory available. This is where the 64-bit operating systems provide the greatest performance advantages, with the ability to reference huge quantities of memory. The scope of this memory allows you to lock individual databases completely into memory or to build models that would traditionally be thought too big for an OLAP solution.

BSO databases have separate caches for the index and page file (metadata and data). Figures 9 and 10 show the related cache settings. To help you judge the settings for both of these caches, statistical information around the effectiveness of the caches is available in Administration Services Console. For more information, see the Oracle Essbase Database Administrator's Guide.


Figure 9. Settings for the BSO caches


Figure 10. Run-time settings for BSO databases



Several other caches exist in BSO databases. The data file cache setting is rarely used, as it relates to when you use Essbase in direct I/O mode, instead of letting the operating system manage the I/O. The calculator and dynamic calc caches are internal behaviors triggered by both the essbase.cfg settings and the outline order. For more information about these caches, see the Oracle Essbase Database Administrator's Guide.

Within an ASO database, there is a single cache, as there is not the same concept of separating the index and page files. Figure 11 shows the cache settings.

For more information about the cache settings, see the Oracle Essbase Database Administrator's Guide.


Backing Up Essbase

Given the strategic importance of Essbase to many organizations, administrators need to take steps to protect their data. This means integrating Essbase backups into routine database maintenance right from the start. Backups provide a way to restore a specific application or, for that matter, the entire production environment.


Figure 11. Run-time cache settings for ASO databases


Manual backups are sufficient if created on a regular schedule, but automating your backups offers greater reliability and can provide peace of mind.

In this section, we introduce the various backup strategies that are available for Essbase. We start with the backup strategies that are available for both BSO and ASO databases. Then we examine strategies for BSO databases. We conclude this section with a list of files that need to be backed up in addition to the database.


Creating an Operating System Backup

The traditional approach to backing up servers is an operating system backup. The challenge from an Essbase point of view is that active database files are typically skipped because they are open for read-write access. So, if you can bring your Essbase environment down, this form of backup is fine. Otherwise, you will need to select one of the other backup strategies covered in this section.


Exporting the Database

One of the most common means of backing up BSO or ASO data is a database export. In an export, you direct Essbase to write the database data to a text file. If the database fails, you can then reload the text file. Because some file management systems do not support large text files, the Essbase server automatically creates a series of 2GB files, appending a number to each file name (01, 02, and so on), until all of the data is exported.
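If you later need to treat a multi-file export as a single stream (for inspection, or to hand off to another system), the numbered files can be concatenated in name order. A minimal Python sketch; the file-naming pattern is an assumption based on the description above:

```python
import glob

def concatenate_exports(pattern, target):
    """Concatenate a numbered series of export files (sorted by name) into one file."""
    parts = sorted(glob.glob(pattern))  # e.g. export01.txt, export02.txt, ...
    with open(target, "w") as out:
        for part in parts:
            with open(part) as src:
                out.write(src.read())
    return parts

# Example (hypothetical paths):
# concatenate_exports("backups/export*.txt", "backups/full_export.txt")
```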

A database export can be executed using Administration Services, MaxL statements, or API methods. Figure 12 shows how you can use the context menu associated with the database node to open the Export Database dialog box.

In the Export Database dialog box, you start by specifying a file name. By default, the file will be placed in the folder where all of the applications are stored, rather than in a folder related to the application or database. Next, select one of three export options:

  • All data Exports all data values, input and calculated.
  • Level 0 data blocks Exports blocks of data associated with the lowest level or leaf nodes of a dimension.
  • Input level data blocks Exports blocks of data where data input, manual or mass, has occurred.


Figure 12. Choosing Database Export in Administration Services



Finally, you can choose to export data in column format. This creates a tabular output that is particularly useful if the goal is to provide a copy of the data to another application. By default, Essbase exports data in a very compact, quasi-tabular format that can be reloaded very quickly. You can also choose to execute the export using a background process, allowing you to continue working in Administration Services.

In the following MaxL example, input-level data is exported to a file called input.txt.


export database DareOLAP.Complete input data to data_file 'input.txt';

To specify a subset of data, you can create and call a report script. The following MaxL example exports a database using the subset described in a report script called input.rep. The output file is input.txt.


export database DareOLAP.Complete using report_file 'input.rep' to data_file 'input.txt';

More modern backup strategies exist for BSO databases. These are discussed in the next section.


Other Backup Strategies for Block Storage Databases

Starting with Essbase release 9.3, new backup methods became available for BSO databases. These include exports through calculation scripts, automated Essbase backup and restore, and transaction replay. All of these methods can be performed manually or automatically.

Creating a Calculation Script Essbase release 9.3 introduced the ability to export data from BSO databases through calculation scripts. A subset data export enables you to specify in more detail the data you would like to export. You can automate calculation scripts using MaxL.

The following MaxL example executes a calculation script called Export on the database called Complete within the application called DareOLAP.


execute calculation DareOLAP.Export on database Complete;

Three database export targets are available via a calculation script: text, binary, and relational. Here, we will look at examples of how to set data export options for each of the target types using the SET DATAEXPORTOPTIONS command. For more information about this and other commands, see the Oracle Essbase Technical Reference.

The following script exports data to a text file. It specifies a data export level of 0 (the lowest-level members in the hierarchy) and then limits output to sales data with values of 1,000 or greater. Next, the script fixes the data slice to the specified dimension members. Finally, the export target is specified as a text file located at b:\exports\jan.txt. Data values are separated by comma (,) delimiters, and missing data values are represented by #MI.



SET DATAEXPORTOPTIONS
  {
  DataExportLevel "LEVEL0";
  };
DATAEXPORTCOND ("Sales" >= 1000);
FIX ("100-10","New York","Actual","Sales");
  DATAEXPORT "File" "," "b:\exports\jan.txt" "#MI";
ENDFIX;


When you export data using the binary method, Essbase creates the export file using the same bitmap compression technique that is used by the Essbase kernel. This makes for a very fast and efficient export process, using minimal disk storage. Binary exports can be fixed only on sparse dimensions. The following script exports all New York data blocks to a binary file located at b:\backup\newyork.bin.



SET DATAEXPORTOPTIONS
  {
  DataExportLevel "ALL";
  };
FIX ("New York");
  DATAEXPORT "BinFile" "b:\backup\newyork.bin";
ENDFIX;


In the next example, selected records are inserted directly into the table named newyork in a relational database. All data is exported for the selected dimension members. In the DATAEXPORT statement, a data source name (DSN) is used to establish a connection, commonly via ODBC, to the cur_sale relational database. The table name is specified as newyork, followed by the user name and password for the relational database.



SET DATAEXPORTOPTIONS
  {
  DataExportLevel "ALL";
  };
FIX ("100-10","New York","Actual","Sales");
  DATAEXPORT "DSN" "cur_sale" "newyork" "admin" "password";
ENDFIX;


Archiving and Restoring a Database

Beginning with Essbase release 11, administrators can back up and restore BSO databases using the automated Essbase backup and restore capabilities. The automated backup copies the database object and data files to an archive location that you specify, from which you can quickly restore the database. You must have the Administrator role to use the automated backup feature.

When backing up a database, Essbase performs the following tasks:

  • Places the database in read-only mode, protecting it from updates during the archive process while still allowing requests to query the database.
  • Writes a copy of the database files to an archive file that resides on the computer hosting the Essbase server.
  • Returns the database to read-write mode.


Figure 13 shows how you can use the context menu associated with the database node to open the Archive Database dialog box. In this dialog box, provide the file name. Unless otherwise specified, the file is placed in the ARBORPATH/app folder. Because this process must place the database in read-only mode, the Force archive option may be used to drop users from the database. Users may reconnect to the database, but they will not be allowed read-write access. The Archive in the background option allows you to continue working in Essbase Administration Services while the archive proceeds.

You can use MaxL to automate the archive process. The following example backs up the Complete database within the DareOLAP application, overwriting the existing archive file.


alter database DareOLAP.Complete force archive to file /Hyperion/



Figure 13. Choosing Database Archive in Administration Services




Figure 14. Choosing Database Restore in Administration Services


Restoring a database is the reverse of archiving a database. You must have the Administrator role to terminate active client connections and restore a database. Figure 14 shows how to use the context menu associated with the database node to open the Restore Database dialog box. The options in this dialog box work like the corresponding ones in the Archive Database dialog box.

You can also use MaxL to automate the restore process. The following example restores the Complete database within the DareOLAP application.


alter database DareOLAP.Complete force restore from file /Hyperion/

For more information about Essbase automated backup and restore, see the EPM System Backup and Recovery Guide.

Tracking Transactions

Another feature new in Essbase release 11 is transaction replay. With this approach, administrators can instruct Essbase to capture each executed transaction for later use. This is valuable because it gives you a single source of all actions that may need to be repeated in case of server or application failure, eliminating the guesswork.

Transaction capture may be enabled at the server, application, or database level. Essentially, this means that you may specify all applications, all databases within an application, or a specific database. To enable transaction logging, the administrator creates a directory on the Essbase server and then specifies the directory location via an Essbase configuration setting. The essbase.cfg setting described next enables transaction logging for all databases associated with the DareOLAP application, writing the log to a folder called C:\Hyperion\trlog.
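A sketch of the corresponding essbase.cfg entry, using the TRANSACTIONLOGLOCATION setting (verify the exact grammar against the Technical Reference for your release):


TRANSACTIONLOGLOCATION DareOLAP C:\Hyperion\trlog NATIVE ENABLE

Restart the Essbase server after editing essbase.cfg so the setting takes effect.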



To view the transaction history, right-click the database name, choose Display Transactions from the menu, and fill in the Display Transactions dialog box, as shown in Figure 15. You can view transactions from the last replay or from a specific date and time. Additionally, you may redirect the output to a file, rather than view it on screen.
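The transaction history can also be listed from MaxL. As a sketch (assuming the list transactions grammar in the MaxL reference; the timestamp and output path shown are hypothetical), the following writes all transactions logged after a given date and time to a file:


query database DareOLAP.Complete list transactions after '11_20_2019:12:00:00' write to file '/Hyperion/trlog/translist.txt';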

To replay transactions, right-click the database name, choose Replay Transactions from the menu, and fill in the Replay Transactions dialog box, as shown in Figure 16. You can replay transactions from the last replay or from a specific date and time. Additionally, you may execute the replay in the background.

You can use MaxL to automate the replay process. The following example replays the transactions in the DareOLAP.Complete database with sequence IDs 1 through 10 and 20 through 100.


alter database DareOLAP.Complete replay transactions using
sequence_id_range 1 to 10, 20 to 100;

Figure 15. Choosing Display Transactions in Administration Services



Figure 16. Choosing Replay Transactions in Administration Services


The next example replays all transactions that were logged after a specified date and time.


alter database DareOLAP.Complete replay transactions after



Other Important Files to Back Up

The previous sections provided you with many options for backup and restore. However, exporting the data from a database is just part of a good backup strategy. Several other files associated with the applications, databases, and Essbase Server are important for recovery.

In general, all files from the following folders should be backed up:

  • ARBORPATH/app/appname
  • ARBORPATH/app/appname/dbname

Additionally, the following files should be backed up:

  • ESSBASEPATH/bin/essbase.sec (the Essbase security file)
  • ESSBASEPATH/bin/essbase.bak (the backup of the Essbase security file)
  • ESSBASEPATH/bin/essbase.cfg (Essbase server configuration settings)

You can find the values of ARBORPATH and ESSBASEPATH by examining the environment variables for the server on which Essbase was installed.



Conclusion

Oracle OLAP and Oracle Essbase are significantly different, both in optimization strategies and in backup processes. Oracle OLAP, as part of the Oracle database, benefits from the configuration and tuning efforts performed on the database itself. A well-performing database goes a long way toward ensuring acceptable load times and query response times for Oracle OLAP data. In addition, when an Oracle database is backed up, so is the Oracle OLAP data. In contrast, Essbase, as a stand-alone multidimensional database, has a variety of optimizations available to tune performance. Many of the optimizations differ based on the data storage model used (ASO or BSO). As with any database, Essbase has its own backup and restore procedures.




