How to effectively benchmark using Java Microbenchmark Harness (JMH)

In this article, we will discuss benchmarking and provide a brief introduction to the Java Microbenchmark Harness (JMH), with working examples.

What is benchmarking?

Benchmarking is the practice of running code to compare the relative performance of different implementations of a data structure or algorithm.

Why benchmark?

Knowing which algorithms or data structures provide better performance for a given programming task allows you to design and implement higher performance code in general. Benchmarking helps you make those decisions.

The Java Microbenchmark Harness

What is the Java Microbenchmark Harness?

Due to the warmup time and dynamic optimization performed by the Java Virtual Machine, Java presents a challenging benchmarking environment. The Java Microbenchmark Harness (JMH) was developed as part of the OpenJDK project to provide a benchmarking tool for Java that addresses these challenges. JMH abstracts away much of the complexity and provides developers with a fairly straightforward API for benchmarking.
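To make the challenge concrete, here is a hand-rolled timing loop (our own sketch, not part of JMH). Timing the same workload back to back usually shows the earliest runs running slower, because the JIT compiler has not yet optimized the hot paths; the exact numbers will vary from machine to machine and run to run.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class NaiveTiming {

    // time a single sort of the given list, in nanoseconds
    static long timeSort(List<Integer> list) {
        long start = System.nanoTime();
        Collections.sort(list);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        // measure the same workload several times in a row
        for (int run = 1; run <= 5; run++) {
            List<Integer> list = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                list.add(random.nextInt());
            }
            // early runs are typically slower than later ones
            System.out.println("run " + run + ": " + timeSort(list) / 1_000 + " us");
        }
    }
}
```

Results from such ad-hoc timing say little about steady-state performance; JMH's warmup iterations and forking exist precisely to control for this.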

Using the Java Microbenchmark Harness

According to the official OpenJDK JMH documentation, the recommended way to set up a JMH benchmark project is from the command line.

First create a pom file; a minimal parent pom looks like this (the artifactId below is only an example — substitute your own coordinates):

	<?xml version="1.0" encoding="UTF-8"?>
	<project xmlns="http://maven.apache.org/POM/4.0.0"
	         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
		<modelVersion>4.0.0</modelVersion>
		<groupId>com.talentify</groupId>
		<artifactId>jmh-parent</artifactId>
		<version>1.0</version>
		<packaging>pom</packaging>
	</project>


Run the following command in the directory containing the pom file:

	$ mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate \
	      -DinteractiveMode=false \
	      -DarchetypeGroupId=org.openjdk.jmh \
	      -DarchetypeArtifactId=jmh-java-benchmark-archetype \
	      -DgroupId=com.talentify \
	      -DartifactId=benchmark \
	      -Dversion=1.0

The above command will insert a modules section like the following into your pom file:

	<modules>
		<module>benchmark</module>
	</modules>

It will also create the following subdirectories and files:

	benchmark/pom.xml
	benchmark/src/main/java/com/talentify/MyBenchmark.java

You must then implement a benchmark class by editing the generated file (see below). Once you have implemented your benchmark, you can build and verify the project:

	$ cd benchmark/
	$ mvn clean verify

Finally, run the benchmark:

	$ java -jar target/benchmarks.jar

Implementing a Benchmark Class

The generated file looks like this:

package com.talentify;

import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

	@Benchmark
	public void testMethod() {
		// This is a demo/sample template for building your JMH benchmarks. Edit as needed.
		// Put your benchmark code here.
	}
}

JMH makes heavy use of annotations. We see the @Benchmark annotation above, which indicates that the method should be benchmarked. Another important annotation is @BenchmarkMode, which indicates the type of benchmark to run. It can be placed on a single benchmark method, or on the benchmark class to apply to every benchmark method in that class.

The available benchmark modes are enumerated in the Mode enum. They include, for example, Mode.AverageTime (the average time per operation) and Mode.Throughput (operations per unit of time). We will illustrate how to use this annotation and several others in the example below.
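As a quick illustration (a sketch of ours; the class name and workload are made up), the same piece of work can be measured under different modes. Returning the computed value from the benchmark method also keeps the JIT from eliminating the measured code as dead:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class ModeExample {

    // the work we measure: sum the integers 0 through 999
    static long sum() {
        long total = 0;
        for (int i = 0; i < 1_000; i++) {
            total += i;
        }
        return total;
    }

    // report the average time of one call, in microseconds
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public long averageTime() {
        return sum();
    }

    // report how many calls complete per unit of time
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public long throughput() {
        return sum();
    }
}
```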

Example: ArrayList vs LinkedList

Recall that the Java standard library includes both a LinkedList and an ArrayList implementation of the List interface. We will run a benchmark to compare the performance of the sort method on each of the classes.

We will use the following annotations:

  • State We mark our class with the State annotation. This tells JMH how our class’s state will be shared between JMH’s worker threads.
  • BenchmarkMode We already discussed the BenchmarkMode annotation above.
  • Param This annotation indicates which parameters to configure for the benchmark. Note that the use of this annotation requires the State annotation on our class.
  • Setup This annotation indicates which method to run before the benchmark. Note that the use of this annotation requires the State annotation on our class.
  • Fork This annotation indicates forking parameters for the benchmark.
  • Warmup This annotation indicates warmup parameters for the benchmark.
  • OutputTimeUnit This annotation indicates the time unit in which the results will be presented.
  • Benchmark We already discussed the Benchmark annotation above.
package com.talentify;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// use the benchmark scope for this state class
@State(Scope.Benchmark)
// compute an average time benchmark
@BenchmarkMode(Mode.AverageTime)
public class MyBenchmark {

	// run three benchmarks with 1000, 10000 and 100000 size lists
	@Param({"1000", "10000", "100000"})
	// the size of our lists
	public int listSize;

	// the array list to benchmark
	public ArrayList<Integer> arrayList;

	// the linked list to benchmark
	public LinkedList<Integer> linkedList;

	// our setup method, run before every benchmark invocation so that
	// each sort starts from a freshly shuffled list
	@Setup(Level.Invocation)
	public void setUp() {
		// create a list with random integers
		List<Integer> list = new Random()
				.ints(listSize)
				.boxed()
				.collect(Collectors.toList());
		// initialize our array list with the values generated above
		arrayList = new ArrayList<>(list);
		// initialize our linked list with the values generated above
		linkedList = new LinkedList<>(list);
	}

	// create a single fork with a single warmup run
	@Fork(value = 1, warmups = 1)
	// perform a single warmup iteration
	@Warmup(iterations = 1)
	// output the results in microseconds
	@OutputTimeUnit(TimeUnit.MICROSECONDS)
	// indicate that this is a benchmark method
	@Benchmark
	public void testArrayListSort() {
		// sort our array list
		arrayList.sort(Comparator.naturalOrder());
	}

	// create a single fork with a single warmup run
	@Fork(value = 1, warmups = 1)
	// perform a single warmup iteration
	@Warmup(iterations = 1)
	// output the results in microseconds
	@OutputTimeUnit(TimeUnit.MICROSECONDS)
	// indicate that this is a benchmark method
	@Benchmark
	public void testLinkedListSort() {
		// sort our linked list
		linkedList.sort(Comparator.naturalOrder());
	}
}

Running the benchmark produces the following output (omitting considerable verbose detail):

Benchmark                       (listSize)  Mode  Cnt     Score     Error  Units
MyBenchmark.testArrayListSort         1000  avgt    5     2.994 ±   0.076  us/op
MyBenchmark.testArrayListSort        10000  avgt    5    43.363 ±   5.172  us/op
MyBenchmark.testArrayListSort       100000  avgt    5   844.443 ± 154.691  us/op
MyBenchmark.testLinkedListSort        1000  avgt    5    11.643 ±   3.248  us/op
MyBenchmark.testLinkedListSort       10000  avgt    5   136.231 ±  13.280  us/op
MyBenchmark.testLinkedListSort      100000  avgt    5  3125.115 ± 505.440  us/op

We observe that sorting an ArrayList is several times faster than sorting a LinkedList.
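This gap is unsurprising: the default List.sort implementation dumps the list into an array, sorts the array, and writes the elements back through a ListIterator, so a LinkedList pays extra traversal costs on both the copy-out and the write-back. A quick sanity check (plain Java, no JMH; the class name is ours) confirms that both list types nevertheless produce identical sorted results:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

public class SortSanityCheck {

    // sort the same values as an ArrayList and as a LinkedList;
    // return true if the two sorted results match
    static boolean sortsAgree(List<Integer> values) {
        List<Integer> arrayList = new ArrayList<>(values);
        List<Integer> linkedList = new LinkedList<>(values);
        arrayList.sort(Comparator.naturalOrder());
        linkedList.sort(Comparator.naturalOrder());
        return arrayList.equals(linkedList);
    }

    public static void main(String[] args) {
        List<Integer> values = new Random(7).ints(1_000).boxed().collect(Collectors.toList());
        System.out.println(sortsAgree(values)); // prints "true"
    }
}
```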


In this article, we briefly discussed the Java Microbenchmark Harness (JMH). The interested reader is encouraged to explore JMH in more detail; good starting points are the official JMH samples and the official JMH guidance on avoiding benchmarking pitfalls.
