Simple
Performance

This section provides some comparisons of Simple's performance against two popular NIO based Java servers, Jetty and AsyncWeb. The goal of a performance comparison is to determine what kind of service a server is capable of by comparing it with a known quantity, like Jetty. The results shown here illustrate the concurrency capability of Simple under stress, and its ability to keep response rates per connection high under such stress. The comparison between servers was done with the JRockit virtual machine, setting the -Xms300m and -Xmx600m parameters for each virtual machine.

Throughput

The first comparison done was with Jetty 6.1.12. The two primary measurements taken were the response rate for each connection and the total number of bytes sent by each server over the duration of the test. The response rate statistics measure the throughput by sampling each connection and calculating the overall responses sent per second. To ensure environmental issues did not affect either test, both servers were tested twice. As can be seen from the tests, Simple delivered a much higher number of requests per second than Jetty in each test, and both tests produced similar results.
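
The benchmark harness itself is not reproduced here; the sketch below is an assumption about how such a measurement can be taken, not the actual test code. Each client connection reports a completed response to a shared counter, and a sampling loop prints the responses per second and the cumulative bytes once per one second sample.

import java.util.concurrent.atomic.AtomicLong;

// Illustrative sampler, not the harness used for these results: each client
// connection calls record() when a response completes, and the run() loop
// prints the responses per second and the cumulative bytes once per sample.
public class ThroughputSampler implements Runnable {

    private final AtomicLong responses = new AtomicLong();
    private final AtomicLong bytes = new AtomicLong();

    public void record(int bodySize) {
        responses.incrementAndGet();
        bytes.addAndGet(bodySize);
    }

    public void run() {
        long previous = 0;
        try {
            while (true) {
                Thread.sleep(1000); // one sample per second, as in the tests above
                long total = responses.get();
                System.out.printf("%d responses/sec, %d bytes total%n",
                        total - previous, bytes.get());
                previous = total;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The same counter accumulates the total bytes received, which is the figure used for the bytes sent comparison later in this section.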

Each test was performed by applying a constant load on each server for the same static resource, which was a 12 kilobyte file. This provides an indication of how the container performs with little or no processing overhead on the server. This is important as it measures the container's ability to respond to requests and deliver content without the results being influenced by the processing time required to generate the content. The test was run for just over five hundred samples, with a sample taken every second.
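
For reference, the kind of handler exercised by such a test can be sketched as follows. This is a minimal illustration assuming the Simple 4.x Container API, not the actual test fixture; the file name and port used here are placeholders.

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import org.simpleframework.http.Request;
import org.simpleframework.http.Response;
import org.simpleframework.http.core.Container;
import org.simpleframework.transport.connect.Connection;
import org.simpleframework.transport.connect.SocketConnection;

// Minimal static resource container: the file is read once at startup and the
// same bytes are written for every request, so the measurements reflect the
// container rather than any content generation.
public class StaticContainer implements Container {

    private final byte[] content;

    public StaticContainer(byte[] content) {
        this.content = content;
    }

    public void handle(Request request, Response response) {
        try {
            OutputStream body = response.getOutputStream();
            response.set("Content-Type", "text/plain");
            response.setDate("Date", System.currentTimeMillis());
            body.write(content);
            body.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        File source = new File("resource.txt"); // placeholder static file
        byte[] content = new byte[(int) source.length()];
        DataInputStream in = new DataInputStream(new FileInputStream(source));
        in.readFully(content);
        in.close();
        Container container = new StaticContainer(content);
        Connection connection = new SocketConnection(container);
        connection.connect(new InetSocketAddress(8080)); // placeholder port
    }
}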

Below, in figure 1, the results from each test are graphed together to provide an overall picture of the throughput. Combining the results in a single graph shows the similarity of each test run and provides a good comparison between the servers. As can be seen, Simple performs better in both runs, with significantly higher response rates.





Figure 1  The response rate comparison with Jetty

To provide a clearer picture of the individual runs, figures 2 and 3 show the first and second runs side by side. The graphs shown below represent the same test runs as those shown in figure 1, simply viewed from another perspective.



Figure 2  The first test


Figure 3  The second test

The response rate graphs above show the response rate as taken for each sample. To ensure that sampling did not give an advantage to either server, a graph of the total number of bytes sent is also provided. Over the duration of the test the sum of the bytes sent was calculated, and is illustrated in figure 4 below. These results were gathered from the same test as figure 1 above. As above, Simple shows higher throughput over the duration of the test.





Figure 4  The total number of bytes sent over time

Scalability

To provide an indication of scalability, a performance test was run with an ever increasing number of concurrent clients. This shows how the response rate is affected as the number of connections and requests increases. Here a 2 kilobyte body is delivered by the server for each request. The load is increased from a single client to two hundred concurrent clients, each issuing 16 requests per pipeline.
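
The client side of this scenario can be illustrated with a pipelined connection. The sketch below is an assumption about how the 16 requests per pipeline might be issued rather than the actual load generator; the host, port, and resource path are placeholders.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Illustrative pipelined client: all sixteen requests are written on a single
// connection before any response is read, and the final request asks the
// server to close the connection so the read loop terminates.
public class PipelinedClient {

    public static void main(String[] args) throws Exception {
        int pipeline = 16;
        Socket socket = new Socket("localhost", 8080); // placeholder host and port
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        StringBuilder batch = new StringBuilder();
        for (int i = 0; i < pipeline; i++) {
            String connection = (i == pipeline - 1) ? "close" : "keep-alive";
            batch.append("GET /resource.txt HTTP/1.1\r\n"); // placeholder resource
            batch.append("Host: localhost\r\n");
            batch.append("Connection: " + connection + "\r\n\r\n");
        }
        out.write(batch.toString().getBytes("ISO-8859-1"));
        out.flush();

        // Drain the responses and count the bytes received; a real client would
        // parse each response and record the time at which it completed.
        byte[] buffer = new byte[8192];
        long received = 0;
        int count;
        while ((count = in.read(buffer)) != -1) {
            received += count;
        }
        System.out.println("bytes received: " + received);
        socket.close();
    }
}

At the top end of the test two hundred such clients run concurrently against the server.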

Figure 5 below illustrates the response times for each test run, where the number of concurrent clients is increased by one with each run. As can be seen, Simple performs much better as load increases; both Jetty and AsyncWeb degrade in performance as concurrency is increased.





Figure 5  The response time for increasing concurrency

These scalability measurements make more sense when compared against the number of bytes sent for each increment in the number of concurrent clients. As can be seen, each server sent an almost identical number of bytes, differing only in header size, which is to be expected as all three servers have their own HTTP response headers. This makes the scalability readings more significant, as it shows that for the same number of bytes Simple completes the scenario much faster and scales better.





Figure 6  The bytes sent for increasing concurrency

Memory

During the scalability tests the heap profile for each server was taken. Each server shows reasonably similar memory consumption, however Simple seems to spend more time collecting transient objects. This is likely due to the number of transient buffers allocated per request. It is also likely that both Jetty and AsyncWeb recycle large buffers more frequently and thus have fewer transient objects collected in the nursery (the first generation of garbage collection).
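
The recycling strategy mentioned here can be sketched as a simple buffer pool. This is an illustration of the general technique only, not code taken from any of the three servers.

import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustration of buffer recycling: a buffer returned to the pool is reused by
// a later request instead of becoming a transient object for the nursery
// collector, at the cost of holding on to the memory between requests.
public class BufferPool {

    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<ByteBuffer>();
    private final int size;

    public BufferPool(int size) {
        this.size = size;
    }

    public ByteBuffer acquire() {
        ByteBuffer buffer = free.poll();
        if (buffer == null) {
            buffer = ByteBuffer.allocate(size); // allocate only on a pool miss
        }
        buffer.clear();
        return buffer;
    }

    public void release(ByteBuffer buffer) {
        free.offer(buffer);
    }
}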





Figure 7  The memory profile for each server

To illustrate the overall memory footprint of Simple, a graph of the number of kilobytes consumed in non-heap memory is also shown. This seems to coincide with the number of classes loaded for each server. Simple, being self contained, does not require as many classes as either Jetty or AsyncWeb and thus has a smaller memory footprint.
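
Readings of this kind can be taken in process through the standard JMX platform beans. The sketch below shows one way such readings could be sampled; it is not the profiler used to produce the graphs.

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Samples non-heap memory use in kilobytes and the number of loaded classes
// once per second using the standard JMX platform beans.
public class FootprintProbe {

    public static void main(String[] args) throws Exception {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();

        while (true) {
            long nonHeap = memory.getNonHeapMemoryUsage().getUsed() / 1024;
            int loaded = classes.getLoadedClassCount();
            System.out.println("non-heap: " + nonHeap + " KB, classes: " + loaded);
            Thread.sleep(1000);
        }
    }
}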





Figure 8  The non-heap memory used


Figure 9  The number of classes loaded





