Design for the Near Future

I spent some time at a large tech company, and one of the things I learned there from someone with experience in large systems was a simple way to think about scaling software:

  1. Determine the size of the problem, which is often an amount of data or rate of interaction (and sometimes both).
  2. Design a system to handle ten times the current size (a scale of 10x).
  3. Accept that as the size approaches ten times that design target (a scale of 100x), the system will need to be redesigned.

The purpose of these guidelines is to provide constraints, without which good design is impossible.
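
To make the arithmetic concrete, here is a minimal sketch in Go, assuming a hypothetical measured load of 10 requests per second (the number is illustrative, not from any real system):

    package main

    import "fmt"

    func main() {
        // Step 1: the measured size of the problem (a hypothetical number).
        currentRPS := 10.0

        // Step 2: design the system to handle ten times that size.
        designTarget := currentRPS * 10

        // Step 3: expect a redesign as load approaches a hundred times it.
        redesignPoint := currentRPS * 100

        fmt.Printf("design for %.0f req/s; plan to redesign before %.0f req/s\n",
            designTarget, redesignPoint)
    }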

Don’t expect a design to scale by a factor of 100 without changing.

If you’re working on a web application that handles 10 requests per second, any new features added to the application should be designed to sustain 100 requests per second. This kind of scaling can typically be handled by running more instances of the code in production and shouldn’t require any large changes to the system’s design.
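
That kind of scaling usually assumes the service is stateless. As a rough sketch (the endpoint and names are hypothetical, not from any particular application), a handler like the following keeps no in-process state, so going from 10 to 100 requests per second is mostly a matter of running more copies behind a load balancer:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // greet keeps no in-process state: everything it needs arrives with
    // the request, so any instance can serve any request.
    func greet(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "hello, %s\n", r.URL.Query().Get("name"))
    }

    func main() {
        http.HandleFunc("/greet", greet)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }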

As usage of the application continues to grow past 100 and toward 1,000 requests per second, though, querying the database might become a bottleneck. That might call for a different querying or replication strategy: a different design.
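
As one hedged sketch of such a redesign, assuming a primary/replica database and Go’s standard database/sql package (the schema and method names are hypothetical):

    package store

    import "database/sql"

    // Store splits traffic between a write primary and a read replica,
    // one common strategy once a single database becomes the bottleneck.
    type Store struct {
        primary *sql.DB // writes, and reads that must see the latest data
        replica *sql.DB // read-only queries that can tolerate a little lag
    }

    func (s *Store) CreateUser(name string) error {
        _, err := s.primary.Exec("INSERT INTO users (name) VALUES ($1)", name)
        return err
    }

    func (s *Store) CountUsers() (int, error) {
        var n int
        // Replica reads may lag the primary slightly; acceptable here.
        err := s.replica.QueryRow("SELECT COUNT(*) FROM users").Scan(&n)
        return n, err
    }

The point is not the particular strategy but that it is a structural change: something more instances alone can’t absorb.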

A design is inherently dictated by the size of the problem it is attempting to solve.

The realization that solving the same problem at hugely different scales requires significantly different solutions was eye-opening to me. This mindset provides reasonable constraints on how far any given solution can be expected to scale, and in doing so, allows a developer to focus on solving the problems of today and tomorrow and not those of the unpredictable future.