Projects:ApplicationScoped ADCS/Technical Specifications
Introduction
ApplicationDictionaryCachedStructures (ADCS) caches Application Dictionary components. It is mainly used from FormInitializationComponent to improve performance by reducing DB reads. It is currently SessionScoped, which means there is one instance per backoffice session.
Depending on the usage (the cache is populated lazily), the heap space a single instance retains can grow to several MB. In environments with many concurrent sessions, the total retained size across all instances grows accordingly.
Measurements
The size of this cache varies significantly depending on the actual application usage. Measurements in some real customer environments show retained sizes per instance ranging from ~700KB to ~45MB, with an average of ~13MB; at that average, 100 concurrent sessions retain more than 1.3GB.
The expected improvement is to replace these per-session caches with a single ADCS instance. This instance will be bigger than the per-session average (but expected to stay below 100MB in any case) and will scale much better with the number of sessions.
Application Scoped
To solve this problem and make memory consumption scale much better with the number of concurrent sessions, the proposal is to move this cache from session to application scope. In this manner, a single ADCS instance would serve all active sessions.
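As a minimal sketch, assuming ADCS is managed as a CDI bean, the change amounts to switching its scope annotation; the class body and package are omitted here and are not the real ones:

import javax.enterprise.context.ApplicationScoped;

// Minimal sketch of the scope change: previously the bean was @SessionScoped,
// giving one instance per backoffice session; with @ApplicationScoped a single
// instance is shared by all sessions, so its contents must be session
// independent and thread safe (see Considerations below).
@ApplicationScoped
public class ApplicationDictionaryCachedStructures {
  // cached Application Dictionary components (tabs, fields, combos, ...)
}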
Considerations
In order to make this change, two main factors need to be taken into consideration.
Cached data
Data that is cached in ADCS must be globally valid, that is, it cannot contain any session-specific information such as language, session role, etc.
In this regard, the main refactor required is in the ComboTableData cache, where the current session's list of accessible clients and organizations was cached. Now this must be resolved when the actual query is executed, so the new key for this cache will be just the field the combo is for, which also allows possible validations and the window access level to be cached.
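The sketch below illustrates this shape. ComboCache, ComboDefinition and execute are hypothetical names standing in for the real ComboTableData handling, not the actual API: the cache key is only the field id, and the session's accessible clients/organizations are passed in at query execution time.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch only: ComboCache, ComboDefinition and execute are
 *  hypothetical stand-ins for the real ComboTableData handling in ADCS. */
public class ComboCache {

  /** Session-independent combo definition (query, validations, window access level). */
  public static class ComboDefinition {
    private final String fieldId;

    ComboDefinition(String fieldId) {
      this.fieldId = fieldId;
    }

    /** The accessible clients/organizations are supplied by the caller when the
     *  query is executed; they are never part of the cached definition. */
    List<String> execute(List<String> clients, List<String> organizations) {
      // ... run the combo query filtered by the session's clients/organizations
      return List.of();
    }
  }

  // New cache key: just the field the combo is for, so the entry is valid for any session.
  private final Map<String, ComboDefinition> combosByField = new ConcurrentHashMap<>();

  public ComboDefinition getCombo(String fieldId) {
    return combosByField.computeIfAbsent(fieldId, ComboDefinition::new);
  }
}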
Concurrency
The other main concern is concurrency. While ADCS was session scoped, concurrency in this cache was very low, as the chances of multiple threads per session are small. Now that the scope is going to be application, the probability of concurrent use of this cache increases significantly.
This implies some refactors need to be done:
- Thread safety: the internal structures implementing the caches must either support concurrency or be guarded by locks.
- Contention: contention should be reduced as much as possible. Currently there are two methods synchronized at instance level: getTab and initializeDALObject. Keeping these synchronizations as they are now would cause huge contention, as only one tab could be retrieved, or one DAL object initialized, at a time (see the sketch after this list):
- getTab: if the tab is already cached, no lock is required (after initialization this is the most common case); locking is only needed for initialization, and even then the lock can be acquired only for the tab id being initialized. This allows several tabs to be initialized in parallel, while if there are two parallel requests for the same tab that is not yet initialized, one of them does the actual initialization and the rest simply wait for it to complete.
- initializeDALObject: the lock can be acquired at DAL object level
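A minimal sketch of this fine-grained locking, assuming a ConcurrentHashMap-backed cache; the Tab type and loadTab helper are hypothetical, and the real getTab/initializeDALObject signatures in ADCS differ:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch only: Tab and loadTab are hypothetical, and the real
 *  getTab/initializeDALObject signatures differ. */
public class TabCache {

  public static class Tab {
    private final String id;

    Tab(String id) {
      this.id = id;
    }
  }

  private final Map<String, Tab> tabs = new ConcurrentHashMap<>();

  /** When the tab is already cached no lock is taken; on a miss, computeIfAbsent
   *  locks only the map bin for this tab id, so different tabs can be initialized
   *  in parallel, while two concurrent requests for the same uninitialized tab
   *  result in one initialization and one wait. */
  public Tab getTab(String tabId) {
    return tabs.computeIfAbsent(tabId, TabCache::loadTab);
  }

  private static Tab loadTab(String tabId) {
    // hypothetical: read the tab definition from the Application Dictionary
    return new Tab(tabId);
  }

  /** Instead of synchronizing on the whole cache instance, the lock is acquired
   *  at the level of the DAL object being initialized. */
  public void initializeDALObject(Object dalObject) {
    synchronized (dalObject) {
      // hypothetical: force initialization of the object's lazy proxies/collections
    }
  }
}

With this approach the common case (cache hit) is lock free, and contention only appears when two threads race to initialize exactly the same tab or DAL object.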
The following graph represents the ADCS invocation flow: