
Reporting Server/Reporting Table and Load Script Development


This document discusses the two main artifacts related to the reporting database, also called the reporting data warehouse: the reporting table definitions and the load scripts that populate them.

The description follows the standard path; special cases (like custom load logic) are discussed in a separate document.

Advice: Table versus Materialized View

When looking at the structure of a reporting database, the advice is to do the following:

For example, consider a reporting table holding information from c_orderline:

From a performance perspective it is interesting to note that refreshing materialized views is quite fast: a standard PostgreSQL installation on an AWS M5 instance can refresh a materialized view at a rate of 2-3 million records per minute.
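As a sketch (the view, table and column names are hypothetical), an aggregating materialized view on top of a loaded orderline reporting table might look like this:

```sql
-- Hypothetical aggregation on top of a loaded reporting table.
CREATE MATERIALIZED VIEW obrtmd_sales_per_month AS
  SELECT date_trunc('month', ol.created) AS salesmonth,
         sum(ol.linenetamt) AS total
    FROM obrtmd_orderline ol
   GROUP BY date_trunc('month', ol.created);

-- Refresh after each load cycle, once the underlying table has been updated:
REFRESH MATERIALIZED VIEW obrtmd_sales_per_month;
```

The refresh recomputes the whole view from the loaded table, which is the operation the performance figures above refer to.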

Table in the reporting database


The reporting database is a normal SQL database within the database server.

Reporting database.png

As you can see, a reporting table name is prefixed by the module's DB prefix (similar to an Openbravo table).

Similar to the tables in the Openbravo database, the definition of a table in the reporting database is represented in the source code in two locations:

Openbravo table in reporting srcdb folder.png

The definition in the application dictionary is visible through the Reporting Tables and Columns window.

Openbravo table in ad.png

Creating and extending

To create or change the reporting table definition do the following steps:

  1. change the definition directly in the database server (also add indexes there)
  2. execute ant export.reporting.model.full within the module. Make sure the module is flagged to be in development. This will read the database model directly from the database server and update the model in the xml files as well as in the application dictionary.

To extend a reporting table with a column from another module, add the column to the table with the 'em_' prefix followed by the module's DB prefix. When the reporting model is then exported, the column will be added to the corresponding module. See below for an example.

Openbravo extending table custom column.png
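For instance, assuming a base reporting table obrtmd_bpartner and an extending module with DB prefix 'xx' (both names hypothetical), the extension column would be added directly in the reporting database like this:

```sql
-- The em_ prefix plus the extending module's DB prefix ("xx") marks the
-- column as belonging to the extending module when the model is exported.
ALTER TABLE obrtmd_bpartner
  ADD COLUMN em_xx_loyalty_tier VARCHAR(60);
```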

Creating the Reporting Database

You can create the reporting database directly from the xml definitions by executing this command in the module folder:

ant create.reporting.database

To update the reporting database, make the required changes to the xml, drop the database from the database server and then run the above command.

Load Script

The load scripts are responsible for loading, updating and refreshing data in the reporting database. Load scripts are executed within the data loader application in a certain order; scripts with the same sorting/order number can be executed in parallel, which can greatly reduce load times.


In its base form the load script is a sql select statement which is executed in the source Openbravo database and its results are used to insert data into the reporting database. Any valid query is allowed, you can do joins, inner selects etc. The only requirement for the standard load script is that the columns it returns in the select clause should use the alias corresponding to the column name of the reporting table in the target database.

Besides the standard load script, there are the following types:

Openbravo load script.png

The other fields of the load script definition:

SQL Script

The standard load script is the most common form. Its SQL script can be any SQL select statement. The SQL script is executed in the source database. The main requirement for the SQL script is that the columns it returns in the select clause should use the alias corresponding to the column name of the reporting table in the target database.

Here is an example:

 SELECT
    bp.c_bpartner_id AS obrtmd_bpartner_id,
    bp.value AS value,
    bp.name AS name,
    bp.iscustomer AS iscustomer,
    bp.isvendor AS isvendor,
    bp.c_bp_group_id AS obrtmd_bp_group_id
 FROM c_bpartner bp
 WHERE bp.ad_client_id IN ('0', $$)
 AND bp.updated > $updatedFrom$
 AND bp.updated < $updatedTo$

The SQL script has extension points defined. When you create your own script, make sure to include these in your script. The following extension points are supported: the select, from, where and group by clauses.

Script extension is discussed in more detail below.

Incremental Load and Client Filtering Parameters

The script above also shows three other parameters which can be useful. Parameters are enclosed between $ signs:

  * $$ : replaced at runtime with the id of the client being loaded (the example also includes client '0' for system data).
  * $updatedFrom$ and $updatedTo$ : timestamps bounding the incremental load window, so only records updated within that window are (re)loaded.


The SQL Script can be extended through custom modules with Update Script Extension subtab. You can extend the select, from, where and group by clause.

Openbravo load script extension.png
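As a sketch of how extensions are merged into the base script (the module prefix 'xx' and the extension columns/conditions are hypothetical), a module contributing a select-clause and a where-clause extension would produce a resulting query along these lines:

```sql
SELECT
    bp.c_bpartner_id AS obrtmd_bpartner_id,
    bp.value AS value,
    -- select-clause extension contributed by the extending module:
    bp.taxid AS em_xx_taxid
 FROM c_bpartner bp
 WHERE bp.ad_client_id IN ('0', $$)
 AND bp.updated > $updatedFrom$
 AND bp.updated < $updatedTo$
 -- where-clause extension contributed by the extending module:
 AND bp.isactive = 'Y'
```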

View resulting Query with extensions

To view the query with its extensions inserted visit the Update Script Content subtab.

Openbravo load script extension content.png

Update script

Besides the standard load script, the data loader application can also run direct update scripts. Update scripts do not load information from a source transactional database.

Update scripts are part of the overall ETL/ELT functionality and are run in the same ordering as load scripts. So it is for example possible to first load data from the source database and then in a next step update this information.

An update script is run in the reporting database so you can use update with select statements accessing other tables and materialized views in the reporting database. However, you need to make sure (by setting the script order) that your update script runs after all its dependent tables/materialized-views have been updated/refreshed.

An example of an update script is the script to update the organization tree after the tree information has been loaded from the source database.

Load script sample update script.png
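As a hedged sketch of such an update script (table and column names are hypothetical), denormalizing the parent organization name after the tree has been loaded could look like this; it runs in the reporting database, so it can reference other reporting tables freely:

```sql
-- Runs in the reporting database; the script order must guarantee that
-- obrtmd_organization has already been loaded from the source database.
UPDATE obrtmd_organization o
   SET parent_name = p.name
  FROM obrtmd_organization p
 WHERE p.obrtmd_organization_id = o.parent_org_id;
```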

Some things to note:

Purge script

A custom update script is available that supports purging data from specific reporting tables to lower the stored data volume, resulting in better reporting performance.

This custom update is defined as follows:


To run the purge process, make sure that the Active field is checked.

If activated, the custom update will look for a JSON file called data-purge-definition.json with the settings to use, so the first step is to create this file:

Having this definition:

  {
    "setup": {
      "purgeFrom": {
        "type": "month",
        "quantity": "13"
      },
      "executionDay": "SUNDAY"
    },
    "steps": [
      {
        "name": "obrts_order_sales",
        "timestampColumn": "created"
      },
      {
        "name": "obrts_orderline_tax",
        "timestampColumn": "",
        "joinTable": {
          "name": "obrts_orderline_sales",
          "parentColumnToJoin": "obrts_orderline_sales_id",
          "columnToJoin": "obrts_orderline_sales_id",
          "timestampColumn": "created"
        }
      }
    ]
  }

The different properties are:

  * purgeFrom : the retention window; records older than quantity times type (here 13 months) are purged.
  * executionDay : the day of the week on which the purge is executed.
  * steps : the tables to purge; each step defines the table name and the timestampColumn used to determine the age of a record.
  * joinTable : used when the table has no usable timestamp column of its own; the age is then taken from the joined parent table via parentColumnToJoin and columnToJoin.

Once the JSON file is created, the purger will be executed during the next run of the load process and will purge the data according to the definition.
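Conceptually (the exact SQL the purger generates is an assumption), the definition above corresponds to deletes like the following, executed in the reporting database:

```sql
-- Step 1: the table has its own timestamp column.
DELETE FROM obrts_order_sales
 WHERE created < now() - interval '13 months';

-- Step 2: no timestamp column of its own, so the age is taken from the
-- joined parent table via the joinTable definition.
DELETE FROM obrts_orderline_tax t
 USING obrts_orderline_sales ol
 WHERE ol.obrts_orderline_sales_id = t.obrts_orderline_sales_id
   AND ol.created < now() - interval '13 months';
```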

Run the load/update script

You can run a load script directly if it is correctly set up in your environment. To run the load script, select a record and click the run script button. Select a client and press done.


This page was last modified on 14 June 2023, at 13:56. Content is available under Creative Commons Attribution-ShareAlike 2.5 Spain License.