Scaling an application normally means the application is successful and demand is high. Unfortunately, that demand can arrive within seconds or minutes rather than spread over days or weeks, so it is best to have a plan before the firehose of demanding users hits.
Here are a few basic rules for scaling:
- Service the request without a network call
- Service the request with the fewest network calls
- Service the request with the smallest payload
Let's see how we can make design choices that support these basic rules for scaling.
First and foremost is building the ability to service a request without making a network call. This seems elusive, since in almost every case the information the user is requesting is not located on the machine where the request originates. We have all experienced the pros and cons of streaming video applications. The leading providers employ buffers: buffering, or caching, the video means the application begins downloading some of what you are about to watch, and once enough has arrived, playback starts smoothly. This is a form of near caching, in which the cache is stored as close to the requesting user as possible. Just as the video is cached prior to playback, so can your application's response data be.
This means implementing a caching solution that runs in the client. You'll want features such as evicting entries, restoring the cache from the client's file system on restart, and invalidating the client cache when the server mutates the data. The performance gain can be tremendous, since you have eliminated the need for network calls entirely. Hazelcast calls this a Near Cache.
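As a sketch of what enabling this might look like, here is a Hazelcast client configuration fragment declaring a Near Cache for a map; the map name `orders` and the specific eviction numbers are hypothetical, so check the Hazelcast reference manual for the options that fit your version and workload:

```xml
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <!-- Keep a local, client-side copy of entries read from the "orders" map -->
    <near-cache name="orders">
        <in-memory-format>OBJECT</in-memory-format>
        <!-- Invalidate local entries when the server-side data mutates -->
        <invalidate-on-change>true</invalidate-on-change>
        <!-- Evict least-recently-used entries once the local cache holds 10,000 -->
        <eviction eviction-policy="LRU" max-size-policy="ENTRY_COUNT" size="10000"/>
    </near-cache>
</hazelcast-client>
```

With this in place, repeated reads of the same keys are served from client memory with no network call at all.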
Now on to the second basic rule for scaling: reducing the number of network calls. This one goes back to the late '70s, when some of the first stored procedures were created. The idea is simple: don't move the data to the process; instead, run the process where the data lives. This approach was the fastest for decades and is still the lifeline of many successful companies today. Its primary drawback is that a stored procedure is typically written in a database vendor's proprietary language. As modern computer languages evolved, shifting logic from stored procedures to a general-purpose language became important. And as data needs grow, you ultimately have to divide the data up, so what used to be centrally located in one database is now deployed across numerous servers that all participate in a cluster.
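The stored-procedure idea above can be sketched in plain Java. The names here (`executeOnKey`, `ownedData`) are hypothetical stand-ins, not a real distributed API: the point is that the mutation runs next to the data, so only the small function crosses the wire instead of a read followed by a write of the whole value.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

public class NearDataDemo {
    // Hypothetical stand-in for a partition owner: the data and the
    // process live in the same place, so a caller ships the function,
    // not the data.
    static <K, V> V executeOnKey(Map<K, V> ownedData, K key, UnaryOperator<V> process) {
        return ownedData.compute(key, (k, v) -> process.apply(v));
    }

    public static void main(String[] args) {
        Map<String, Long> balances = new ConcurrentHashMap<>();
        balances.put("acct-1", 100L);
        // One "round trip" instead of a get followed by a put.
        executeOnKey(balances, "acct-1", v -> v + 25);
        System.out.println(balances.get("acct-1")); // prints 125
    }
}
```

Compare this with the naive alternative of fetching the value, mutating it locally, and writing it back: two network calls, a larger payload, and a race window in between.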
Hazelcast partitions the data across all the active servers using a consistent hashing algorithm. For more on this, please refer to "When to increase partition count" by Enes Akar. This algorithm allows clients and servers alike to locate which active server is responsible for any given piece of data. The same algorithm is also used for remote execution of a process on behalf of that data. This is how we can drastically reduce the number of network calls: by running a process on the server where the data is located. Hazelcast calls this an Entry Processor; it is similar to a distributed implementation of a stored procedure, written in Java instead of a database vendor's SQL dialect.
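The routing half of this can be sketched in a few lines. This is a toy illustration, not Hazelcast's actual implementation (which, per its documentation, hashes the serialized key with MurmurHash3 over 271 partitions by default); the point is that every client and server computes the same deterministic answer, so any of them can route a request straight to the owning member:

```java
public class PartitionDemo {
    static final int PARTITION_COUNT = 271; // Hazelcast's default partition count

    // Toy stand-in for the hash step: real Hazelcast hashes the
    // serialized key bytes, then maps the result onto a partition.
    static int partitionFor(String key) {
        return Math.abs(key.hashCode() % PARTITION_COUNT);
    }

    public static void main(String[] args) {
        // Deterministic: the same key always lands in the same partition,
        // no matter which node does the computing.
        System.out.println("\"order-42\" lives in partition " + partitionFor("order-42"));
    }
}
```

A partition table then maps each partition id to the member currently owning it, which is how a request (or an Entry Processor) goes directly to the right server in one hop.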
Finally, when dealing with any distributed solution, the biggest impact on overall performance is how much data is placed on the wire. This is governed by serialization, and Java's default implementation has significant performance issues. Please refer to "Maximizing Hazelcast Performance with Serialization" by Fuad Malikov. As that article illustrates, every alternative serialization method tested at least doubled the read throughput of standard Java serialization.
So there you have it: to scale big, adhere to a few basic principles.
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.” — E. F. Schumacher