How to Distribute Java Application on Multiple JVMs

Overview

Distributing a Java application across multiple JVMs allows it to process more user requests simply by adding more hosts. Distributed systems can be incredibly powerful, but they come with challenges of their own, such as scalability, fault tolerance, consistency, and concurrency. This article explains how adding a distributed cache to the application can address these challenges.

Introduction

Java Application In a Single JVM

As a business grows, its applications must process an increasing number of requests. The requests may come from the application's users or may be generated by internal processes. The amount of load that a single-JVM application can handle has a hard limit, set by the capacity of the host the application runs on. Once this limit is reached, the application cannot process more requests.

Distributing a Java application across multiple JVMs allows it to handle load that a single machine couldn't, by using the combined power of multiple hosts.

Benefits of Distributing Web Applications

Distributing an application has three main benefits:

  1. Higher scalability
  2. Higher availability
  3. Reduced latency

A cluster is a group of hosts, usually connected by a high-speed LAN, that is dedicated to executing a particular set of tasks. A Java application that runs in a cluster can process more requests because each host handles its own share of the load. The combined processing power of the cluster is much higher than the capacity of a single host, and it increases as more hosts, also known as nodes, are added to the cluster.

Distributed Java applications provide better availability by running multiple application instances in the cluster. Even if a single node fails, applications running on other nodes continue to process user requests.

Java applications running in a cluster offer reduced latency because each JVM handles less load than a single large JVM instance would, and because smaller heaps result in shorter garbage collection pauses.

Architectures for Distributed Applications

Distributed application architectures can be divided into three main categories:

  1. Multi-tier architecture
  2. Share-nothing architecture
  3. Hybrid architecture

This article concentrates on the multi-tier architecture because it is the most popular architecture for developing business applications. An overview of the share-nothing and hybrid architectures is provided in a separate article; the link is at the bottom of this page.

Multi-tier Architecture

A multi-tier architecture consists of multiple hosts running web applications under the management of application servers such as Tomcat or Jetty. The application is usually partitioned into tiers, each responsible for its own, clearly defined part of the work. A typical multi-tier web application may be organized into the following tiers:

  1. A web tier that processes user requests and renders the results
  2. A business logic tier that implements the application's core functionality
  3. A data tier, usually a database, that provides durable storage

The multi-tier architecture works best for business applications such as finance, social media, analytics and OLAP/OLTP systems, where a reliable single source of truth such as a database is mandatory. The key strengths of this architecture are:

  1. Consistency
  2. Fault tolerance

The multi-tier architecture delivers the consistency so critical for business applications thanks to the strong transactional guarantees of the database storing the data at the back end of the application. The database ensures that application data is stored reliably even if application instances fail. In addition to durable storage, the database provides applications with shared access to the persistent application state.

The multi-tier architecture achieves fault tolerance by running application servers on multiple hosts connected to the same database, so that if any of the cluster hosts fails, the other hosts continue processing requests.

Unfortunately, the same strengths that make the multi-tier architecture popular also make it notoriously hard to scale horizontally by adding more hosts to the cluster. The section Distributing Multi-tier Java Web Applications below discusses the challenges associated with scaling multi-tier architectures and the ways to solve them in detail.

Share-nothing Architecture

A share-nothing architecture consists of multiple web servers, each serving its own copy of the data. This architecture works best for static data such as web pages, text, images and videos. The key advantages of this architecture are:

  1. Horizontal scalability
  2. High concurrency
  3. Fault tolerance

A share-nothing architecture scales horizontally well because it doesn't have to access a shared data source such as a database or file system. It also offers maximum concurrency because hosts don't compete for shared resources. Fault tolerance is easy to achieve by running multiple hosts, each serving a copy of the data, so that if any of the cluster hosts fails, the others simply continue processing requests. A share-nothing architecture is often implemented as a single-tier system using a web server such as Apache or Nginx.

The main disadvantage of the share-nothing architecture is that it doesn't offer data consistency. The main source of inconsistency is the delay in synchronizing copies of the data across the cluster, which leads to extended periods of time during which hosts in the cluster respond to requests with different versions of the data. Because data consistency is important to business web applications, the share-nothing architecture is rarely used on its own when developing them in Java.

Hybrid Architecture

A hybrid application architecture usually combines a cluster running the business part of the application, implemented using a multi-tier architecture, with a share-nothing cluster or a content delivery network (CDN) serving requests for static data such as images, CSS and videos. Because the combined architectures have disjoint sets of disadvantages, the scalability of the hybrid architecture is limited by the scalability of the business application that drives the consumption of the static data. To scale a hybrid architecture horizontally by adding more hosts to the cluster, the same scalability challenges that characterize multi-tier applications must be addressed.

Distributing Multi-tier Java Web Applications

While the database plays the important role of reliable transactional data storage, this role also makes it the main bottleneck in a distributed application. To satisfy ACID requirements, the database must process requests sequentially. As the number of application instances grows, they spend more and more time waiting for responses from the database, leaving the benefit of the added processing power unrealized.

So, the main task when distributing multi-tier Java web applications is making them scale horizontally by reducing or eliminating the bottleneck caused by serialized data sources such as databases, file systems or remote web services. This can be achieved by caching frequently requested data, which reduces the need to go to the database. A cache API serves the data from memory in a highly concurrent manner, which allows adding more hosts to the cluster because the application no longer has to go through the database bottleneck.

In addition to enabling horizontal scalability by reducing the database access, adding caching to the web tier increases application performance by reducing time spent executing business logic and reducing the expense of rendering the results.
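The caching pattern described above is often called cache-aside: check the cache first and fall through to the database only on a miss. The sketch below illustrates the idea with in-memory stand-ins; the `cache` map and the `loadFromDatabase` method are placeholders for a real distributed cache and a real data source.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal cache-aside sketch. The cache and the "database" here are
// simple in-memory stand-ins, not a real distributed cache or JDBC source.
public class CacheAsideExample {

    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static int databaseHits = 0; // counts how often the slow data source is reached

    // Simulates an expensive, serialized data source such as a database query.
    static String loadFromDatabase(String key) {
        databaseHits++;
        return "value-for-" + key;
    }

    // Check the cache first; only fall through to the database on a miss.
    static String get(String key) {
        return cache.computeIfAbsent(key, CacheAsideExample::loadFromDatabase);
    }

    public static void main(String[] args) {
        get("invoice:42");   // miss: goes to the database
        get("invoice:42");   // hit: served from memory
        System.out.println("database hits: " + databaseHits);
    }
}
```

Repeated reads of the same key reach the database only once; every subsequent read is served from memory, which is what removes the database from the hot path.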

To distribute a Java application on multiple JVMs, two things need to be added to the application and supporting infrastructure:

  1. A load balancer
  2. A distributed cache API

Load Balancer

(Figure: Distributed Java application on multiple JVMs)

A load balancer in front of the cluster makes sure that all nodes receive a fair share of user requests. A hardware load balancer is usually the best option as it provides maximum performance. Companies such as F5 and Cisco are known for good hardware load balancers. If your budget cannot accommodate a hardware load balancer, an Apache server running a combination of mod_proxy, mod_rewrite and mod_proxy_balancer can be another option.
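The core job of a load balancer, spreading requests evenly across nodes, can be illustrated with a simple round-robin selection. This is only a sketch of the distribution policy; real load balancers also perform health checks, connection management and failover.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// A minimal round-robin balancer sketch: each request is routed to the
// next cluster node in circular order, so all nodes receive an even share.
public class RoundRobinBalancer {

    private final List<String> nodes;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinBalancer(List<String> nodes) {
        this.nodes = List.copyOf(nodes);
    }

    // Returns the node that should handle the next request.
    public String nextNode() {
        int index = (int) (counter.getAndIncrement() % nodes.size());
        return nodes.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer balancer =
                new RoundRobinBalancer(List.of("node-1", "node-2", "node-3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(balancer.nextNode()); // node-1, node-2, node-3, node-1
        }
    }
}
```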

Distributed Cache API

A distributed cache API such as Cacheonix enables horizontal scalability by reducing or eliminating the need for accessing serialized data sources such as databases.

It significantly reduces the number of requests to the database by keeping all frequently accessed data in the memory of a large distributed data store. The distributed cache API creates this store by partitioning the cached data and distributing it across the cluster, so that each JVM carries a piece of the bigger cache.
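The basic mechanism behind partitioning is mapping each key to the node that owns it, typically by hashing. The sketch below shows the idea in its simplest form; production caches use more elaborate schemes (for example, rebalancing partitions when cluster membership changes), so treat this as an illustration only.

```java
// Illustrates how a distributed cache can partition keys across JVMs:
// the key's hash picks the owning node, so each JVM carries its own
// piece of the bigger cache.
public class KeyPartitioner {

    // Maps a key to one of nodeCount partitions, one partition per JVM.
    static int partitionFor(String key, int nodeCount) {
        // Mask the sign bit instead of Math.abs, which overflows
        // for Integer.MIN_VALUE.
        return (key.hashCode() & 0x7fffffff) % nodeCount;
    }

    public static void main(String[] args) {
        int nodes = 3;
        for (String key : new String[] {"user:1", "user:2", "user:3"}) {
            System.out.println(key + " -> node " + partitionFor(key, nodes));
        }
    }
}
```

Because the mapping is deterministic, every JVM in the cluster computes the same owner for a given key and can route the request without any central coordinator.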

Because application threads run on separate JVMs, they can no longer access their shared data directly. That is why it is critical that the distributed cache API provides strict data consistency when caching shared data in the presence of updates and of cluster members failing or joining the cluster. When the data is updated on a single JVM, a distributed cache API distributes the modifications reliably to all remote JVMs, even when hosts fail or join the cluster. Without consistency, applications have to deal with data conflicts caused by missed updates, which is usually a serious problem for business applications.

Conclusion

A multi-tier web application can be distributed in a cluster by adding a distributed cache API with strong consistency guarantees that removes scalability bottlenecks by caching frequently requested data in memory.

Adding Distributed Cache API to Your Application

Cacheonix is an Open Source distributed Java cache that offers strict data consistency, a Hibernate cache plugin and a web cache API. To add Cacheonix to your Maven project, add the following to the dependencies section of your pom.xml:

<dependency>
   <groupId>org.cacheonix</groupId>
   <artifactId>cacheonix-core</artifactId>
   <version>2.3.1</version>
</dependency>
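With the dependency in place, a cache can be obtained and used much like a java.util.Map. The sketch below is an assumption-laden illustration: the cache name "invoice.cache" is hypothetical and must match a cache configured in your cacheonix-config.xml.

```java
import org.cacheonix.Cacheonix;
import org.cacheonix.cache.Cache;

// Minimal usage sketch. The cache name "invoice.cache" is an assumed
// example and must match a cache defined in cacheonix-config.xml.
public class CacheonixExample {

    public static void main(String[] args) {
        Cacheonix cacheonix = Cacheonix.getInstance();
        Cache<String, String> cache = cacheonix.getCache("invoice.cache");

        cache.put("invoice:42", "paid");         // update propagated to the cluster
        String status = cache.get("invoice:42"); // served from distributed memory
        System.out.println(status);
    }
}
```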

See Also
