An Effective TTL-Caching Pattern

Microservice architectures come with a high cost of service dependencies, which impacts the overall performance of the system. In a typical setup, when a request hits a service, it first needs to grab data from its dependent APIs, after which it performs more operations and returns the final result to the caller. Dependent APIs may in turn call other APIs or talk to a database to return the requested data, creating a complex graph of dependencies.

Caching is an effective way to improve the performance of such a system. In many cases, you can cache the responses of dependent APIs locally and avoid making those extra calls, especially if the data is somewhat static in nature. A TTL (time to live) is a great feature that lets you cache data for a certain duration, after which it expires. The pattern looks something like this:


First, check whether the data is cached. If not, fetch it from the original source and cache it for future calls. Recently, I worked on a summary dashboard that polls the service every x seconds to get the latest status of some background jobs, and I used the above caching technique to improve the overall performance of the widget. The big question is: what TTL should you use? There are two issues with selecting a TTL:
  1. A high TTL means stale data for the duration of the TTL.
  2. A low TTL means a poor hit/miss ratio.
Neither is good. I want a frequently updated cache and a good hit/miss ratio. An efficient technique is to use a combination of both. The pattern is simple:
  1. Cache the data under ORIGINAL_KEY with a high TTL.
  2. Cache another key, REFRESH_KEY (ORIGINAL_KEY + 'refresh'), with a lower TTL.
  3. When data is requested, read it from the cache using ORIGINAL_KEY and return it. Also invoke a background task that checks whether REFRESH_KEY has expired. If it has, fetch the data from the original source and update ORIGINAL_KEY with the new data.

Since the update to ORIGINAL_KEY happens in the background, it doesn't impact API performance. This simple yet powerful pattern takes performance to the next level! Using it, I have achieved both a good hit/miss ratio and frequent updates to the cache.

