These Zing and HotSpot benchmarks use equivalent versions of Java SE 8 (unless noted otherwise). In every case, physical servers, cloud instances, available memory, and core counts were the same, using identical x86-64 server processors with identical clock rates.
Here you will find charts highlighting peak latency and throughput for given service levels across workloads that include:
- Apache Cassandra
- Apache Kafka
- Red Hat JBoss Data Grid
- Apache Spark
- Apache Solr
No experimental features were used in these benchmarks; only currently available production-level compilers, garbage collectors, and Java runtimes. In all test cases, Azul ensures that the systems under test are running code that has completed all appropriate warm-up cycles and is operating at peak optimization. These Zing benchmarks are run using the LLVM-based Falcon JIT compiler and the C4 collector. The Oracle and OpenJDK benchmarks all use the C2 optimizing JIT compiler. In many cases, Oracle Java GC performance testing includes both the G1 and CMS collectors.
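For reference, the collectors named above are selected on HotSpot/OpenJDK 8 with standard command-line flags. This is a config sketch, not the actual benchmark invocation; `app.jar` and the 16g heap size are placeholders:

```shell
# HotSpot/OpenJDK 8: select the G1 collector (heap size is illustrative)
java -XX:+UseG1GC -Xms16g -Xmx16g -jar app.jar

# HotSpot/OpenJDK 8: select the CMS collector
java -XX:+UseConcMarkSweepGC -Xms16g -Xmx16g -jar app.jar

# Zing: C4 is the default collector, so no GC-selection flag is needed
java -Xms16g -Xmx16g -jar app.jar
```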
Zing is designed to deliver improved metrics on standard Java workloads, and it does: improved cluster carrying capacity and lower CPU utilization across a wide range of use cases. Zing also excels at corner cases: if you require consistently low latency or need to access multi-TB in-memory data stores, Zing handles these difficult workloads with ease.
While benchmarks like the ones listed here provide a valuable perspective, we encourage you to try Zing on your site using your application and workloads. We’ve made it easy to download and use Zing immediately — begin by downloading the Zing bits for your target Linux distro at https://www.azul.com/zingtrial/.
Elasticsearch Performance: OpenJDK vs. Zing
This benchmark shows Elasticsearch performance using the HotSpot JVM configured with the low-latency CMS collector across four benchmark passes, comparing the observed latency percentiles from the 50th to the 99.999th ("five nines") against four runs of Zing using the LLVM-powered Falcon JIT compiler. Zing delivers consistently low latency with this Elasticsearch workload in every pass except c8, while HotSpot struggles with massive pauses between the 99.9th and 99.99th percentiles.
Apache Cassandra Performance: OpenJDK vs. Zing
Cassandra and its closed-source counterpart DataStax Enterprise take full advantage of Zing's better memory management technology as well as the power of the Falcon JIT compiler. In SLA-sensitive applications, Zing typically delivers anywhere from 3-7X more transactions per second in Cassandra workloads compared with Cassandra on OpenJDK or Oracle HotSpot. Azul has performed a large number of Cassandra performance tests using Zing; these are just a subset highlighting the improved carrying capacity enabled by the Zing JVM.
Here's a summary of one recent in-house benchmark at a video advertising company:
- Production Cassandra cluster: 6x AWS i3.2xlarge instances
- Transaction mix: approx. 80/20 write/read split
- Critical SLA metrics for read operations:
  - 20ms at 99.9%
  - 50ms at 99.99%
  - 100ms at 99.998%
- Oracle HotSpot/OpenJDK with the G1 collector can sustain ~4K TPS before breaching the SLA
- Zing can maintain 21K TPS before breaching the SLA

On the same 6 AWS nodes, simply replacing the HotSpot-based JVM with Zing supports 5X more transactions without re-tuning or re-architecting the application.
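The SLA above can be expressed directly in code. The following is a minimal sketch, assuming an array of recorded read latencies in milliseconds; the thresholds mirror the SLA quoted above, but the class and method names are hypothetical, and a production harness would use a tool such as HdrHistogram rather than sorting raw samples:

```java
import java.util.Arrays;

/** Minimal SLA check over recorded read latencies (illustrative, not Azul's harness). */
public class SlaCheck {

    /** Nearest-rank percentile of a latency sample, in the sample's units. */
    static long percentile(long[] latencies, double pct) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length); // nearest-rank method
        return sorted[Math.max(0, rank - 1)];
    }

    /** True if the sample meets the read SLA quoted above. */
    static boolean meetsReadSla(long[] latenciesMs) {
        return percentile(latenciesMs, 99.9) <= 20
            && percentile(latenciesMs, 99.99) <= 50
            && percentile(latenciesMs, 99.998) <= 100;
    }
}
```

Note that at 99.998% even one outlier in roughly 50,000 requests can breach the SLA, which is why tail pauses, not average latency, decide the sustainable TPS.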
Chart A — note the order of magnitude of the JVM-related pauses on the y-axis — the delta between Oracle HotSpot and Zing is highlighted in this series.
Chart C shows the positive performance delta between Zing and HotSpot on the Cassandra “Bursty” benchmark suite.
As noted above, Azul has had extensive experience working with large (and some web-scale) Cassandra deployments. Cassandra performs extremely well with Zing and requires minimal tuning for optimal results. As can be seen in Chart A (above), the Y-axis tells the complete story regarding Zing vs. OpenJDK in Cassandra environments. With a 30K TPS transaction load, OpenJDK delivers multiple 200 msec stalls across a 5-minute (warmed up) test run, while the worst outcome Zing delivered was a single 1.4 msec pause.
At every transaction rate, Zing will deliver better and more consistent results with Cassandra, while simultaneously increasing the carrying capacity of on-premises or cloud-based infrastructure.
Apache Kafka Performance: OpenJDK vs. Zing
Zing supports a variety of Apache Kafka workloads. Kafka is a high-throughput, low-latency platform for distributing and processing high volumes of near-real-time data. Written in Java and Scala (both of which run well on server-based JVMs), Kafka takes full advantage of Zing's support for high allocation rates, jitter-free pauseless operation, and its ability to handle high-throughput operations without suffering from the performance artifacts that can plague large Java deployments.
Red Hat JBoss Data Grid Performance: OpenJDK vs. Zing
Red Hat® JBoss® Data Grid is an intelligent, distributed caching solution. It provides the ability to handle higher transaction volumes, meet strict uptime requirements, deploy into hybrid clouds, and access accurate, real-time information. Azul Zing is ideal for grid-based in-memory deployments leveraging larger data nodes in transactional Big Data applications. Zing allows organizations to accelerate time to market, improve application runtime consistency, and deliver higher sustained throughput for their JBoss Data Grid-based implementations. In fact, Red Hat makes Zing subscriptions available to JBoss Data Grid customers; for additional details, read this press release. You can also download the Red Hat Solution Brief for additional detail.
Apache Spark Performance: OpenJDK vs. Zing
This series takes data normalized across several different Spark tests, comparing OpenJDK 8u192 to Zing 19.2. The results highlight the power of Zing’s Falcon JIT compiler, which has been extensively tuned for Spark performance over the past 24 months.
Apache Solr Performance: OpenJDK vs. Zing
Note: This benchmark was originally published by Azul Deputy CTO Simon Ritter on the Azul Systems blog.
Recently, here at Azul we decided to compare the performance of Solr running on Zing using C4 with Oracle's HotSpot JVM using G1 for garbage collection. To do this, we worked with one of our customers who runs Solr on 200 production nodes. They also use a good-sized data set of 20 TB, which would provide a realistic comparison between the two JVMs.
To make things fair, the tests were run on paired workloads using identical configurations. The configurations were:
- Full-text search on a machine with 32 GB RAM configured for a 16 GB JVM heap.
- Metadata search on a machine with 8 GB RAM configured for a 4 GB JVM heap.
As I've said before, people sometimes think that because Zing can scale all the way up to a 2 TB heap, it must need bigger heaps to work efficiently. This is definitely not the case; the 4 GB heap used for the second set of tests is certainly not big by today's standards.
The tests being run were not artificially created benchmarks, but an evaluation of how a real production system works. You can’t really get a better test scenario than that when evaluating performance.
Again, to ensure fairness and to eliminate any short-term effects on the systems, both were run for forty-eight hours. We wanted to make sure that we would see any long-term trends in our data.
To assess the performance of each JVM, we used jHiccup to measure the latency associated with the JVM while running the tests. jHiccup is a great tool for this because it runs inside the same JVM as the code under test but does not interact directly with the application code. jHiccup measures the latency observed by the application that is caused by everything other than the application itself: the JVM, the operating system, and the physical hardware. Because we're running the same application code on identical hardware, any differences in observed latency come down purely to differences in the way the JVMs work internally. You can find more information about jHiccup here.
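The idea behind jHiccup can be sketched in a few lines: a thread repeatedly sleeps for a fixed interval and records how much longer than that interval it actually took to wake up; the excess is the "hiccup" imposed by the JVM, OS, and hardware rather than by the application. This is a simplified illustration with hypothetical names, not jHiccup's actual implementation (the real tool runs as a javaagent and records into an HdrHistogram):

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified hiccup meter in the spirit of jHiccup (illustrative only). */
public class HiccupMeter {

    /** Sleep for intervalMs, n times, recording each wake-up delay beyond the interval. */
    static List<Long> measure(int n, long intervalMs) {
        List<Long> hiccupsMs = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            long start = System.nanoTime();
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Anything beyond the requested interval is a platform-induced stall,
            // e.g. a GC pause, OS scheduling delay, or page fault.
            hiccupsMs.add(Math.max(0L, elapsedMs - intervalMs));
        }
        return hiccupsMs;
    }
}
```

A long GC pause shows up as a large value in this series even though the measuring thread executes no application code, which is exactly the property that makes the comparison JVM-to-JVM rather than application-to-application.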
I don't think it's an overstatement to say that the results were conclusive: Zing performs significantly better in this scenario. The graphs that follow show the latency associated with HotSpot using the G1 garbage collector on the left and Zing using the C4 collector on the right.
For the full-text tests we got these results:
The maximum pause time for Oracle's JVM was 1667ms; for Zing it was 67ms. Even discounting most of HotSpot's outliers (and why would you, since they will impact your performance), it's clear from the graph that HotSpot's latency is consistently around 250ms.
With the scale of the graph it’s hard to tell for Zing, so let’s look at the graph scaled to the values recorded.
Doing this, we can see that with Zing the observed latency is consistently 15-20ms. From our observations of Zing on other workloads, and from further analysis, we can say that 10ms of that latency comes from the OS and hardware, not the JVM. This is consistent with a machine that has a small number of cores and many threads (so plenty of context switching).
For the second, metadata search test, we get these results:
In this case, the maximum latency for HotSpot was a massive 184,684ms (that's over three minutes!) compared to Zing's maximum latency of 107ms.
Again, scaling the Zing graph to see the data more clearly we get this.
It's said that a picture is worth a thousand words, and I think these graphs really speak for themselves.
Note: the Solr benchmark summary above was published by Simon Ritter on the Azul Systems blog. You can see Simon’s complete Solr post here: https://www.azul.com/searching-performance-apache-solr/
Our field, engineering, and support organizations include many Java performance specialists with years of experience benchmarking and implementing Zing across a wide range of use cases and deployment models.
If you are ready to try Zing on your site, running your workloads and monitoring tools, do not hesitate to contact an Azul representative.