Projects:Retail Master Data Synchronization
Introduction
The goal of this project is to improve the loading of master data from the server by the WebPOS client (or other mobile applications).
The main focus is handling larger volumes of master data by implementing paged requests to the server.
As part of this project, the consistency checks on data loaded by the client will also be improved, to ensure that once the application has finished loading it is completely functional.
Technical Specification
This project has been implemented in several steps, each one implementing a small feature or improvement.
Phase 1: Consistency Checks
The first phase implements and improves several checks to ensure that, once the application has finished loading and the POS window is shown, the application is completely functional and all required data has been loaded. The following checks have been implemented:
- If a terminal property or a master data model fails to load during the full load process, a popup will be shown informing the user of the error and forcing a full refresh of the data.
- The XHR request handling has been improved to ensure that all error situations are handled, both connection errors and server-side errors.
- Model checksums will be set only after the local table has been created, to prevent inconsistent situations in which the checksum exists but the table is missing (see the sketch after this list).
- Model update timestamps and full/incremental refresh timestamps will be set after the corresponding process has finished, instead of at the beginning of the process.
- A terminal property handling API has been created to ensure that errors are handled consistently when loading properties.
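The following TypeScript sketch illustrates the write ordering described above. It is only illustrative: all function names (createLocalTable, insertRecords, saveChecksum, saveRefreshTimestamp) are assumptions, not the actual WebPOS API.

    // Minimal sketch of the ordering fix; every helper name is hypothetical.
    declare function createLocalTable(model: string): Promise<void>;
    declare function insertRecords(model: string, records: unknown[]): Promise<void>;
    declare function saveChecksum(model: string, checksum: string): Promise<void>;
    declare function saveRefreshTimestamp(model: string, time: number): Promise<void>;

    async function fullLoadModel(model: string, records: unknown[], checksum: string): Promise<void> {
      // Create and populate the local table first.
      await createLocalTable(model);
      await insertRecords(model, records);
      // Persist the checksum only once the table exists, so a crash can
      // never leave a checksum pointing at a missing table.
      await saveChecksum(model, checksum);
      // Timestamps are written last, when the process has truly finished.
      await saveRefreshTimestamp(model, Date.now());
    }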
Phase 2: Implement Paged Requests
The second phase implements a mechanism to request model data in batches of 35,000 records. This has been done to improve both server-side and client-side performance.
After processing each batch of records of a model, the next batch is requested and processed. This is repeated until the model is fully loaded.
The server-side performance improvement comes from the fact that each query needs to handle a smaller set of data.
On the other hand, the client-side performance improvement comes from the reduction in the size of the XHR response objects (as a reference, a response containing 130,000 products was roughly 80 megabytes). Browsers on low-end devices, such as tablets or computers with very limited memory, usually crash when handling very big objects. With paged requests the size of the JavaScript objects is bounded, so low-end devices are capable of loading big master data models.
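A minimal sketch of such a paged loading loop in TypeScript; the endpoint URL, parameter names, and helpers below are assumptions for illustration, not the real WebPOS implementation.

    const PAGE_SIZE = 35000;

    async function loadModelPaged(model: string): Promise<void> {
      let offset = 0;
      while (true) {
        // Each iteration asks the server for a single batch, so neither the
        // server query nor the client-side response object ever has to deal
        // with more than PAGE_SIZE records at once.
        const batch = await fetchBatch(model, offset, PAGE_SIZE);
        await processBatch(model, batch);
        if (batch.length < PAGE_SIZE) {
          break; // last (possibly partial) batch: the model is fully loaded
        }
        offset += PAGE_SIZE;
      }
    }

    async function fetchBatch(model: string, offset: number, limit: number): Promise<unknown[]> {
      // Hypothetical endpoint and parameter names.
      const response = await fetch('/masterdata/' + model + '?offset=' + offset + '&limit=' + limit);
      if (!response.ok) {
        throw new Error('Failed to load ' + model + ' at offset ' + offset);
      }
      return response.json();
    }

    declare function processBatch(model: string, records: unknown[]): Promise<void>;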
Phase 3: Serialized Loading and Visual Login
This phase implements both a performance improvement on the server and a visual improvement on the client.
- The master data loading process has been serialized, meaning that a model is not loaded until the previous model has finished loading. This improves server-side performance and scalability, since it reduces the number of parallel database connections (a sketch follows this list).
- The loading image of the login process has been refactored into a progress bar which also shows a small informative label explaining the current loading step.
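Sketched below, under the same assumptions as the earlier snippets, is how serialized loading differs from firing all requests in parallel; loadModelPaged and reportProgress are hypothetical helpers.

    declare function loadModelPaged(model: string): Promise<void>; // from the Phase 2 sketch
    declare function reportProgress(label: string, done: number, total: number): void;

    async function loadAllModels(models: string[]): Promise<void> {
      for (let i = 0; i < models.length; i++) {
        // Sequential: at most one master data request is in flight, so the
        // server never holds more than one database connection per client.
        await loadModelPaged(models[i]);
        // Drive the login progress bar with an informative label.
        reportProgress('Loading ' + models[i], i + 1, models.length);
      }
    }

The alternative, Promise.all(models.map(loadModelPaged)), would open one server-side database connection per model at the same time, which is exactly what this phase avoids.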
Phase 4: Further improvements
This last phase implements an additional improvement to the consistency checks. If the application already has master data loaded and the next full master data load fails, it is not always necessary to force a full refresh.
If a model fails to load and the request was not part of a paged request, the old model data is still present, so it should be possible to keep working, even with old data. The next full or incremental refresh will take care of updating it.
Therefore, the only case in which the application forces a full refresh is when a paged request fails after part of the model has already been loaded (because the first batch will already have deleted the old data).
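This rule can be condensed into a small decision function; the names below are, again, purely illustrative and not the actual WebPOS error-handling code.

    type FailureAction = 'keep-old-data' | 'force-full-refresh';

    function onModelLoadFailure(isPagedRequest: boolean, batchesAlreadyProcessed: number): FailureAction {
      if (isPagedRequest && batchesAlreadyProcessed > 0) {
        // The first batch already wiped the old data, so the local copy is
        // partial and unusable: a full refresh is the only safe option.
        return 'force-full-refresh';
      }
      // Either the request was not paged, or the very first batch failed:
      // the old model data is still intact, so keep working with it and let
      // the next full or incremental refresh bring it up to date.
      return 'keep-old-data';
    }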
Tested Scenarios
In order to verify this project, several scenarios have been tested, combining different situations: online and offline connections, failing and successful preference loading, and failing and successful master data loading.
No previous loading done
For the following cases, a popup is shown explaining the error and prompting the user to reload or log out to fix the problem. In all cases, once the error is fixed the login process proceeds normally and the application loads correctly:
- A model fails to load during full master data load.
- The terminal object or terminal properties fail to load.
- A model or terminal properties fail to load and immediately afterwards the connection is lost.
- The connection is lost during properties loading.
- The connection is lost during master data load.
- An offline login is attempted without data in the browser cache.
- An offline login is attempted with data cached but without master data.
A full refresh has already been done
Since the terminal object is a critical part of the application, these are the only cases during an incremental load in which a reload is required so that the object can be requested again:
- The terminal object fails to load.
- The connection is lost while loading the terminal object.
The following cases will be transparently ignored to allow the user to continue working:
- A model fails to load during the incremental refresh.
- The terminal properties fail to load.
- The connection is lost during properties loading.
- The connection is lost during master data load.
Full refresh after a full refresh has been done
These two cases act transparently since the old data is still present in the model:
- A non-paged model fails to load.
- The first paged request of a model fails to load.
Since the first request will wipe the old data, a full refresh will be required if:
- A paged (second or later) request of a model fails to load.
Automated Tests
Since this project modifies the entire loading process, all existing tests implicitly verify that it works correctly.
A new test will be added with a big master data model (~100,000 products) to verify that paged loading works.
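A sketch of what such a test could look like; the helpers (generateProducts, serveAsPagedEndpoint, countLocalRecords) are assumptions, since the actual test harness is not described here.

    // Hypothetical test sketch; every helper name here is an assumption.
    declare function generateProducts(count: number): unknown[];
    declare function serveAsPagedEndpoint(model: string, records: unknown[], pageSize: number): void;
    declare function countLocalRecords(model: string): Promise<number>;
    declare function loadModelPaged(model: string): Promise<void>; // from the Phase 2 sketch

    async function testPagedLoadingOfLargeModel(): Promise<void> {
      const products = generateProducts(100000);         // synthetic catalogue
      serveAsPagedEndpoint('Product', products, 35000);  // mock server, one page per request

      await loadModelPaged('Product');                   // exercise the paged loop

      const loaded = await countLocalRecords('Product');
      if (loaded !== products.length) {
        throw new Error('Expected ' + products.length + ' products, found ' + loaded);
      }
    }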
Additional Documentation
Feature Request: