Release Management/ERP sizing
A sizing is an approximation of the hardware resources required to support a specific software implementation, in this case Openbravo ERP. This document sets the first foundations for sizing the hardware required to run Openbravo ERP under certain defined conditions.
- Gather real data
- Define test plan
- Create the measuring tool
- Set up environments and run tests
- Performance benchmark analysis
- Create sizing guide
- Verify/validate the results
- Publish the results
Define the hardware to be used in Openbravo ERP based on certain information provided by the end user.
How precise do we want to be?
- Ask simple questions in the questionnaires. This is key to making the rest of the work valid.
- Use dedicated servers for the testing.
- Don't measure too much or too little.
The goal is to prepare questions that are as effective as possible. Types of information to be covered:
- Processing (CPU).
- Number of concurrent users.
- Distribution of user types based on their usage (simple, average, complex).
- Database size: number of products, business partners, invoices, etc.
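To make the questionnaire answers usable later in the process, they can be captured in a small data structure. A minimal sketch in Python; the field names and profile labels are illustrative, not part of any Openbravo tool:

```python
from dataclasses import dataclass

@dataclass
class SizingInput:
    """Answers gathered from the sizing questionnaire (illustrative fields)."""
    concurrent_users: int
    profile_mix: dict          # fraction of users per profile; must sum to 1.0
    products: int              # database size indicators
    business_partners: int
    invoices_per_month: int

    def validate(self):
        total = sum(self.profile_mix.values())
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"profile mix sums to {total}, expected 1.0")

answers = SizingInput(
    concurrent_users=50,
    profile_mix={"simple": 0.6, "average": 0.3, "complex": 0.1},
    products=10_000,
    business_partners=2_000,
    invoices_per_month=5_000,
)
answers.validate()
```

Keeping the answers structured like this makes it straightforward to feed them into the capacity matrix built later in the process.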
Gather real data
We already have many users around the world using Openbravo ERP, and we have the Heartbeat that provides information for some of these installations. Additionally the support team has information about real installations, which we can use to feed our database and tweak the final results.
Define test plan
The goal is first to define the different ways in which resources are used throughout the application. So we first need to define a set of workflows for 3 different user profiles, based on the complexity of their usage (simple, average, complex).
We will ask the QA and Platform teams to help us write these workflows.
Also, we need to define the expected speed/delay. As a general rule, no page should take more than 3 seconds to load. This parameter must also be measured against the available network bandwidth.
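The 3-second rule interacts with bandwidth: a page cannot load faster than its payload can travel over the link. A rough sanity check, using illustrative page-size and link-speed numbers:

```python
def min_transfer_seconds(page_kb: float, bandwidth_mbps: float) -> float:
    """Lower bound on page load time imposed by bandwidth alone."""
    bits = page_kb * 1024 * 8
    return bits / (bandwidth_mbps * 1_000_000)

# A 300 KB page on a 1 Mbit/s link already consumes ~2.46 s of the 3 s budget,
# leaving very little room for server-side processing.
floor = min_transfer_seconds(300, 1.0)
within_budget = floor <= 3.0
```

This is why the benchmark results should always record the bandwidth they were measured under.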
Create the measuring tool
We have chosen Apache JMeter as the performance measuring tool. The goal of this step is to translate the workflows into JMeter tests.
Define hardware environments
We will use Amazon EC2 as the base to choose the machines, using the following combinations and parameters:
- Architectures: x86 and x86_64
- Processors: normal, high, very high (number of cores, speed).
- Memory: number of GBs, speed.
- HDD: number of GBs, speed, RAID configuration.
- Operating systems: Windows, Linux.
We also need to consider the client-side requirements. Questions we want the tests to answer:
- Number of users per GB of memory.
- Does 64bit matter?
- Benefits of adding cores (processing units).
- Impact of the HDD speed.
- Is clustering possible? What are the benefits?
- Database with and without performance tuning.
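The combinations above multiply quickly, so it helps to enumerate them programmatically and prune the matrix before provisioning EC2 instances. A sketch using the standard library; the memory sizes are illustrative, the other values are the ones listed above:

```python
from itertools import product

architectures = ["x86", "x86_64"]
cpu_tiers = ["normal", "high", "very high"]
memory_gb = [2, 4, 8]              # illustrative sizes
operating_systems = ["Windows", "Linux"]

# One dict per candidate test environment.
matrix = [
    {"arch": a, "cpu": c, "ram_gb": m, "os": o}
    for a, c, m, o in product(architectures, cpu_tiers, memory_gb, operating_systems)
]
# 2 * 3 * 3 * 2 = 36 candidate environments before pruning.
```

Enumerating the full matrix first makes it explicit how many test runs each added parameter costs, which is useful when deciding what not to measure.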
Set up environments and run tests
The goal is to run the tests using our tool and build a database out of these measurements, creating a matrix of the hardware parameters vs. the user requirements.
- The server must not be running anything else; it must be completely dedicated to the tests.
- To check the consistency of the results, some tests may need to be run multiple times.
- Set up measuring tools to monitor the machine load on the previously defined parameters.
- Define the interval between the tests.
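While a test runs, the server's load should be sampled at a fixed interval so that each JMeter run can be matched with the machine load it produced. A minimal stdlib-only sampler, assuming a Linux server (`os.getloadavg` is not available on Windows):

```python
import os
import time

def sample_load(duration_s: float, interval_s: float):
    """Record the 1-minute load average every interval_s seconds."""
    samples = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        one_min, _, _ = os.getloadavg()
        samples.append((time.monotonic(), one_min))
        time.sleep(interval_s)
    return samples

# Short demo run; in a real test the duration matches the JMeter run.
samples = sample_load(duration_s=0.3, interval_s=0.1)
```

In practice tools such as sar or vmstat would record more counters (CPU, memory, disk I/O); the point is that the sampling interval is defined up front and kept identical across environments so runs stay comparable.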
Performance benchmark analysis
The goal is to interpret the collected data and translate it into real requirements: CPU, Memory, Disk, etc.
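One concrete output of this analysis is the "users per GB" figure asked for earlier: for each environment, find the highest user count whose measured page times stayed within the 3-second budget, then divide by the machine's memory. A sketch over fabricated example measurements:

```python
def max_users_within_budget(results, budget_s=3.0):
    """results: {user_count: worst_page_seconds} for one environment."""
    ok = [users for users, worst in results.items() if worst <= budget_s]
    return max(ok) if ok else 0

# Fabricated measurements for a hypothetical 4 GB machine.
measured = {10: 1.1, 20: 1.8, 40: 2.9, 80: 5.4}
capacity = max_users_within_budget(measured)   # 40 users stayed under 3 s
users_per_gb = capacity / 4                    # 10.0
```

Repeating this per environment fills in one cell of the hardware-vs-requirements matrix at a time.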
Create sizing guide
The goal is to create the questionnaire and the possible hardware recommendations based on those answers.
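The guide can then map questionnaire answers onto hardware tiers via the capacity figures from the benchmarks. A deliberately simplified sketch; the tier names, capacities and profile weights are placeholders, not validated recommendations:

```python
# Capacity table distilled from the benchmarks (placeholder numbers).
TIERS = [
    ("small",  {"cores": 2, "ram_gb": 4},  25),   # up to 25 effective users
    ("medium", {"cores": 4, "ram_gb": 8},  75),
    ("large",  {"cores": 8, "ram_gb": 16}, 200),
]

# Complex users cost more than simple ones; these weights are assumptions.
WEIGHTS = {"simple": 1.0, "average": 1.5, "complex": 2.5}

def recommend(users: int, mix: dict):
    """Map concurrent users and their profile mix to a hardware tier."""
    effective = users * sum(WEIGHTS[p] * share for p, share in mix.items())
    for name, hw, capacity in TIERS:
        if effective <= capacity:
            return name, hw
    return "large+", {"note": "consider clustering"}

# 50 users, mostly simple usage -> 65 effective users -> "medium".
tier, hw = recommend(50, {"simple": 0.6, "average": 0.3, "complex": 0.1})
```

The weighting step is what ties the user-profile question in the questionnaire back to the benchmark data.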
Verify/validate the results
Do a real-world test and verify that it works as expected. Tweak the parameters if required. Also verify against the Support team's information and the Heartbeat data.
Publish the results
It's time to deliver the results to the community, customers and the sales team. It's important to choose how the results are presented.