ExpressConnect Docs


Cache Framework

The performance of an API is a core concern and an important KPI for any API consumer. The response time of the search (GET) operations is fundamental to the usability of the APIs. Digitisation trends and the move to a microservices architecture require the underlying components to respond with the lowest possible latency; any bottleneck in the request lifecycle can make a use case unusable for end users.

Traditionally, the TRIRIGA APIs are cached based on the request URIs. The data stays in the cache only for a limited period before it is cleared from the cache store. Because every distinct query-string value produces a different URI, the cache-miss rate is high and the underlying data store/database ends up serving most of the requests.
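
The sketch below, in plain Java, illustrates that problem with a hypothetical URI-keyed cache: because the full request URI (including the query string) is the cache key, every new filter combination is a miss that falls through to the database. The class and method names are illustrative, not the TRIRIGA implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a minimal URI-keyed cache, not the TRIRIGA implementation.
// Because the full request URI (including the query string) is the key,
// "/api/assets?floor=1" and "/api/assets?floor=2" are cached separately,
// so most distinct searches miss the cache and hit the database.
public class UriKeyedCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    public String get(String requestUri, java.util.function.Function<String, String> loadFromDatabase) {
        // computeIfAbsent only reuses a cached response when the exact URI repeats
        return store.computeIfAbsent(requestUri, loadFromDatabase);
    }

    public static void main(String[] args) {
        UriKeyedCache cache = new UriKeyedCache();
        java.util.function.Function<String, String> db = uri -> "result for " + uri; // stand-in for a DB query
        cache.get("/api/assets?floor=1", db); // miss -> database
        cache.get("/api/assets?floor=2", db); // different query string -> another miss
        cache.get("/api/assets?floor=1", db); // only an exact repeat hits the cache
    }
}
```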

Modern requirements, such as catering to usage spikes, instant synchronisation of the cache, and guaranteed data availability in the cache store, generally need heavy customisation and typically lead to an external caching solution at a high cost.

The Cache provider component is dynamic: it is aware of changes to the underlying data model and auto-adjusts the cached data model accordingly. Additionally, an Admin can adjust the cache size and other parameters through the ExpressAPI Native App.
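
As a rough illustration, the sketch below shows how adjustable settings and data-model awareness could look. The CacheProvider class, its methods, and its parameters are assumptions made for illustration only; they are not the ExpressAPI Native App interface.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a cache provider that reacts to data-model changes and
// to settings adjusted by an Admin. Names are illustrative only.
public class CacheProvider {
    private final Map<String, Object> store = new ConcurrentHashMap<>();
    private volatile int maxEntries;

    public CacheProvider(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    // Applied when an Admin changes the cache size in the settings app;
    // the new limit is read on every insert, so no restart is needed.
    public void updateMaxEntries(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    // Called when the underlying data model changes (e.g. a field is added or
    // removed) so entries built against the old model are dropped and rebuilt
    // lazily on the next request.
    public void onDataModelChanged(String businessObjectName) {
        store.keySet().removeIf(key -> key.startsWith(businessObjectName + ":"));
    }

    public void put(String key, Object value) {
        // Simple size cap for illustration: new entries are skipped once the limit is reached.
        if (store.size() < maxEntries) {
            store.put(key, value);
        }
    }

    public Object get(String key) {
        return store.get(key);
    }
}
```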

Certain data sets can be pinned in memory through the settings app to prevent them from being evicted from the cache store.
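
The following sketch shows one way pinning can be modelled: a small LRU cache that skips pinned keys during eviction. It is a conceptual example only; the PinnableCache name and its constructor are hypothetical, not the actual settings-app mechanism.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Set;

// Illustrative sketch of "pinning": entries whose keys are pinned are never
// evicted, even when the cache is over capacity. Names are hypothetical.
public class PinnableCache<K, V> {
    private final int capacity;
    private final Set<K> pinnedKeys;
    // Access-order LinkedHashMap keeps least-recently-used keys first.
    private final LinkedHashMap<K, V> entries = new LinkedHashMap<>(16, 0.75f, true);

    public PinnableCache(int capacity, Set<K> pinnedKeys) {
        this.capacity = capacity;
        this.pinnedKeys = pinnedKeys;
    }

    public synchronized void put(K key, V value) {
        entries.put(key, value);
        evictIfNeeded();
    }

    public synchronized V get(K key) {
        return entries.get(key);
    }

    private void evictIfNeeded() {
        // Walk entries in least-recently-used order, skipping pinned keys.
        Iterator<K> it = entries.keySet().iterator();
        while (entries.size() > capacity && it.hasNext()) {
            K candidate = it.next();
            if (!pinnedKeys.contains(candidate)) {
                it.remove();
            }
        }
    }
}
```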

The cache store can auto-warm (pre-populate the data) on server start-up. It can then be kept up to date through an “Event” subscription for real-time updates or through configured scheduled refreshes.
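
A minimal sketch of those three mechanisms, warm-up at start-up, event-driven updates, and a scheduled refresh, is shown below. The WarmableCache class and its method names are illustrative assumptions, not the ExpressConnect API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of cache warm-up and refresh: warmUp() pre-populates the store
// at start-up, onRecordChanged() models an "Event" subscription pushing
// real-time updates, and the scheduler models a configured periodic refresh.
public class WarmableCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Pre-populate the cache at server start-up.
    public void warmUp(List<String> keysToPreload) {
        keysToPreload.forEach(key -> store.put(key, loadFromDatabase(key)));
    }

    // Real-time update path: invoked whenever a change event for a record arrives.
    public void onRecordChanged(String key) {
        store.put(key, loadFromDatabase(key));
    }

    // Fallback path: refresh every cached entry on a fixed schedule.
    public void scheduleRefresh(long intervalMinutes) {
        scheduler.scheduleAtFixedRate(
                () -> store.replaceAll((key, oldValue) -> loadFromDatabase(key)),
                intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
    }

    private String loadFromDatabase(String key) {
        return "fresh value for " + key; // stand-in for a real data-store query
    }
}
```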

The cache also provides a distributed caching capability: in deployments with more than one App Server, the same cache data is available on all servers and kept in sync.
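
Conceptually, the distributed capability behaves like the sketch below, where a write on one node is replicated to every peer so all App Servers serve the same data. Real deployments use a replication or pub/sub transport; the DistributedCacheNode class here is an in-process stand-in for illustration only.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual sketch of keeping multiple App Server caches in sync: a write on
// one node is pushed to every peer so all servers hold the same data.
public class DistributedCacheNode {
    private final Map<String, String> localStore = new ConcurrentHashMap<>();
    private final List<DistributedCacheNode> peers = new CopyOnWriteArrayList<>();

    public void addPeer(DistributedCacheNode peer) {
        peers.add(peer);
    }

    // Write locally, then replicate to every other App Server.
    public void put(String key, String value) {
        localStore.put(key, value);
        peers.forEach(peer -> peer.applyReplicated(key, value));
    }

    // Apply an update that originated on another node (no further re-broadcast).
    void applyReplicated(String key, String value) {
        localStore.put(key, value);
    }

    public String get(String key) {
        return localStore.get(key);
    }
}
```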