Architecture Patterns: Caching (Part 1)

The first steps to understanding caching

Kislay Verma
9 min read · Feb 26, 2022

Originally published on my website — https://kislayverma.com/software-architecture/architecture-patterns-caching-part-1/

Performance has always been a key feature of technical systems. On today’s internet, sub-second latencies are the norm, and slow page loads cost companies money because potential customers simply won’t wait. At the same time, more and more data from many different sources has to be loaded to build a rich user experience (think of the number of things going on on a typical FB page). This data-gathering problem is further exacerbated by the trend towards microservices.

Given all this, how are we to build super-fast systems?

What is caching?

Caching is the general term for temporarily storing frequently read data in a place from which it can be read much faster than from its source (database, file system, service, whatever). This reduction in read time lowers the system’s latency. It also increases the system’s overall throughput: because requests are served faster, more requests can be served per unit of time.
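This idea can be sketched in a few lines of Python: a minimal read-through cache that checks an in-memory store first and falls back to the slow source only on a miss. (The names `ReadThroughCache` and `slow_fetch` are illustrative assumptions, not from the original article.)

```python
import time

class ReadThroughCache:
    """Minimal read-through cache sketch: check the cache first,
    fall back to the slow source on a miss, and store the result."""

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn  # the slow source of truth (DB, service, ...)
        self.store = {}           # in-memory cache storage

    def get(self, key):
        if key in self.store:         # cache hit: fast path
            return self.store[key]
        value = self.fetch_fn(key)    # cache miss: read from the source
        self.store[key] = value       # populate the cache for future reads
        return value

def slow_fetch(key):
    """Stand-in for a database or service call."""
    time.sleep(0.05)  # simulate source latency
    return key.upper()

cache = ReadThroughCache(slow_fetch)
first = cache.get("user-42")   # miss: pays the full source latency
second = cache.get("user-42")  # hit: served from memory, far faster
```

The second `get` returns in microseconds instead of tens of milliseconds, which is exactly the latency and throughput win described above.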
