Scalable Counters for Web Applications

So you need to provide a count or counter for your web application, but you want it to scale. The naive approach is to simply run select count(*) from table. That fails under load because it requires scanning your entire collection on every request.

The first question to ask is: do you need exact counts, or will approximate counts be good enough? In many situations, an approximate count is perfectly reasonable. Think about the use case of tracking web hits. When you're talking about millions of hits, what is the difference between 1,000,000 and 1,000,001? Of course, only your business expert will know whether approximate or exact answers are required. The decision, though, is crucial, because it's the difference between an easy implementation and a hard (costly) one.

Let's say, for the purposes of this article, that you need counts that are very close to accurate, and that you need to scale a lot. The first step is to pre-calculate the count and cache the result. When a new web hit occurs, grab the current count, add one, and write it back. This approach will scale for a while, but the chance of missing a count goes up as load goes up. Because we're not explicitly locking the row (which can be expensive), the last writer wins: two requests that read the same value will both write back the same incremented count, and one of the hits is lost.
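
Here's a minimal sketch of that race, using an in-memory dict and threads as stand-ins for the datastore and concurrent requests (all names here are illustrative, not from any particular framework):

```python
import threading
import time

# In-memory stand-in for the cached counter record.
store = {"hits": 0}

def record_hit():
    count = store["hits"]      # grab the current count
    time.sleep(0)              # simulate datastore latency; yields the CPU
    store["hits"] = count + 1  # write it back: last writer wins

threads = [threading.Thread(target=record_hit) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(store["hits"])  # typically prints less than 100: hits were lost
```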

The next option is to wrap the "grab record, increment, put record back" sequence inside a locking transaction, so that only one writer can access the counter at a time. This guarantees an accurate count, but it will greatly slow down the site as contention around the single counter increases.
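
As a sketch, a lock serializing the read-increment-write sequence stands in for the datastore transaction here:

```python
import threading

store = {"hits": 0}
counter_lock = threading.Lock()  # stand-in for a locking transaction

def record_hit():
    with counter_lock:  # only one writer touches the counter at a time
        store["hits"] = store["hits"] + 1

threads = [threading.Thread(target=record_hit) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(store["hits"])  # always prints 100, but the writes ran one at a time
```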

The third option, and the best one, is to split the counter up into smaller counters, or partitions. When it's time to get the full, single count, simply fetch all the counter partitions and add them up. For very high loads, increase the number of partitions. The idea is that it's quick to add up 100 partitions, while writers now have 100 different counters to lock around instead of one.
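
A sketch of the read side, with the partition layout as an illustrative assumption:

```python
NUM_SHARDS = 100
shards = [0] * NUM_SHARDS  # 100 independent counters whose sum is the total

def total_hits():
    # Reading the count means fetching every partition and summing them.
    # 100 small reads by key is cheap next to scanning the whole table.
    return sum(shards)
```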

How do you pick which partition to increment? One easy way is to hash the request's timestamp (or some other part of the request that changes frequently) and mod it by the number of partitions in the system. The idea here is that the load spreads across the partitions as the number of concurrent requests increases.
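
A sketch of the write side, using the current time in nanoseconds as the frequently changing value to hash (the shard and lock names are illustrative):

```python
import threading
import time

NUM_SHARDS = 100
shards = [0] * NUM_SHARDS
shard_locks = [threading.Lock() for _ in range(NUM_SHARDS)]

def record_hit():
    # Hash something that varies per request and mod by the shard count,
    # so concurrent writers mostly land on different partitions.
    index = hash(time.time_ns()) % NUM_SHARDS
    with shard_locks[index]:
        shards[index] += 1
```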

In any scalable web system, reads should be by key, and writes are expensive. Do whatever you can to read a single object by a key, and minimize your writes. Minimize the contention around objects in the data store, too. Realize that ad hoc queries can almost always be implemented by pre-calculating the answers, so that an ad hoc query is simply retrieving a record by key (instead of scanning through all rows and computing the answer as you go).
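
For example, a "hits per day" question can be pre-calculated at write time so the query becomes a single key lookup (the key format is an assumption for illustration):

```python
from collections import defaultdict

daily_hits = defaultdict(int)

def record_hit(day):
    daily_hits[day] += 1     # pay the aggregation cost at write time

def hits_on(day):
    return daily_hits[day]   # the "query" is one read by key, no scan

record_hit("2008-05-28")
print(hits_on("2008-05-28"))  # 1
```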

For more on this technique, I recommend the excellent video Building Scalable Web Applications with Google App Engine.
