System Settings To Discover Performance Problems



This article gives some tips on configuring the system to make it easier to discover performance problems.


Having correct logs with the proper information is of paramount importance to detect what caused a problem. It is good practice to regularly take a look at these logs; even when no actual problem has been detected, it is sometimes possible to address issues before they arise.

A good log is not a log containing detailed information about everything done in the system. For example, logging all database queries gives exhaustive information about everything that happened there, but such a log is extremely difficult to read because it can become really huge, and writing it also consumes significant resources, slowing down the entire system.


Openbravo log

This is the log Openbravo writes; it is configured in the config/log4j.lcf file. In this file it is possible to configure where the file is written (and its name), the default log level, the output format, etc. Note that for changes to this file to take effect, it must be deployed and Tomcat restarted (ant smartbuild and a Tomcat restart).

When reading a log, it is very useful to know when each line was written. In versions prior to 2.50MP31 and 3.0MP1, by default the log included the time but not the date. It is highly recommended to also include the date to make the log easier to read. This can be done by modifying the ConversionPattern lines to something like %d{ISO8601} [%t] %-5p %c - %m%n.
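For example, the relevant lines in config/log4j.lcf could look like the following sketch (the appender name R and the file path are illustrative and may differ in your installation):

```properties
# Hypothetical appender configuration; only the ConversionPattern line
# is the change discussed above
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=/var/log/tomcat/openbravo.log
log4j.appender.R.layout=org.apache.log4j.PatternLayout
# %d{ISO8601} prefixes every line with date and time
log4j.appender.R.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```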

PostgreSQL log

Logging slow queries

PostgreSQL can log all queries, including the values of their parameters, that take longer than a minimum time. This is very useful to understand which queries are slow, how long they take to execute and how often they are run.

This is defined with the log_min_duration_statement parameter in the postgresql.conf file (the location of this file varies depending on the installation). The value assigned to this parameter is an integer indicating the minimum execution time in milliseconds: queries taking longer than this are logged. The threshold to use depends on the system, but 1000, which logs queries taking longer than 1 second, is a good rule of thumb to start with. Also note that it logs "wall time", not CPU time, so if a lock affects a query, that query will appear as slow even though in other circumstances it wouldn't have any problem, and the query to fix might be the one causing the lock.
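In postgresql.conf, the setting would look like the following (a value of -1, the default, disables this logging; 0 logs every statement):

```properties
# Log every statement that takes longer than one second
log_min_duration_statement = 1000
```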

After modifying this value, it is not necessary to restart PostgreSQL; reloading the configuration is enough, so the change can be made without affecting users of the application. Depending on the system, the reload can be done by executing /etc/init.d/postgresql reload.

The log file location also depends on the configuration; it can typically be found in the /var/log/postgresql directory.
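Once slow-query logging is enabled, a quick way to spot the worst offenders is to extract and sort the logged durations. The sketch below assumes the default log line format (... LOG:  duration: 123.4 ms  statement: ...); the file name postgresql-main.log and the sample lines are illustrative:

```shell
# Illustrative sample of slow-query log lines (real ones come from PostgreSQL)
printf '%s\n' \
  'LOG:  duration: 12.3 ms  statement: SELECT 1' \
  'LOG:  duration: 1500.0 ms  statement: SELECT 2' > postgresql-main.log

# Extract the logged durations and show the largest one
grep -o 'duration: [0-9.]* ms' postgresql-main.log | sort -t' ' -k2 -n | tail -1
```

Sorting numerically on the second field (the milliseconds value) puts the slowest statement last.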

Memory dump on OutOfMemoryError

As discussed in this article, when an out of memory error occurs, it is very handy to have a memory dump file available to analyze.

It is possible to generate this file automatically whenever an OutOfMemoryError occurs. This can be configured in the CATALINA_OPTS used by the Tomcat instance running Openbravo by adding -XX:+HeapDumpOnOutOfMemoryError. You can also specify the path where the dumps are saved with -XX:HeapDumpPath=path-to-dump-file.
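For example, CATALINA_OPTS could be extended in Tomcat's setenv.sh as sketched below (the dump directory /var/tomcat/dumps is illustrative):

```shell
# Write a heap dump automatically when an OutOfMemoryError occurs
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tomcat/dumps"
export CATALINA_OPTS
```

Note the + sign: -XX:+HeapDumpOnOutOfMemoryError enables the option, while -XX:-HeapDumpOnOutOfMemoryError would disable it.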

This page was last modified on 12 June 2012, at 08:27. Content is available under the Creative Commons Attribution-ShareAlike 2.5 Spain License.