
Editorial: To Benchmark, or Not to Benchmark?

You might have seen some headlines recently about Google’s plans to retire their Octane JavaScript benchmark suite. If you’re not aware of this or didn’t read past the headline, let me briefly recap. Google introduced Octane to replace the industry-standard SunSpider benchmark. SunSpider was created by Apple’s Safari team and was one of the first JavaScript benchmarks.

There were two problems with SunSpider. First, it was based on microbenchmarks (think testing the creation of a new array thousands of times), which didn’t reflect real-world usage very accurately. Second, SunSpider rankings came to carry a lot of weight among browser makers, resulting in some vendors optimizing their JavaScript engines for better benchmark scores rather than for the needs of real programs. In some instances, these tweaks even led to production code running slower than before!
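To make the microbenchmark pattern concrete, here is a minimal sketch in the SunSpider style (the function name and iteration count are illustrative, not taken from an actual SunSpider test). It times one tiny operation repeated many times, which is exactly the kind of tight loop an engine can special-case without making real applications any faster:

```javascript
// Hypothetical SunSpider-style microbenchmark: time a single cheap
// operation repeated thousands of times.
function benchCreateArray(iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    const arr = new Array(100); // the one operation under test
    arr[0] = i;                 // touch the array so it isn't optimized away
  }
  return Date.now() - start;    // elapsed milliseconds
}

const elapsed = benchCreateArray(100000);
console.log(`100,000 array creations took ${elapsed} ms`);
```

A score derived from a loop like this rewards engines that recognize and shortcut the specific pattern, which is how benchmark-driven tuning can diverge from real-world performance.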

Octane focused on trying to create tests that more accurately simulated real workloads, and became a standard against which JavaScript implementations were measured. However, browser makers have once again caught up and we’re seeing optimizations tailored to Octane’s tests. That’s not to say that benchmarks haven’t been useful. The competition between browsers has resulted in massive improvements to JavaScript performance across the board.

Vaguely interesting, you might say, but how does this affect my day-to-day job as a developer? Benchmarks are often cited when trying to convince people of the benefits of framework Y over framework X, and some people place a lot of weight on these numbers. Last week I noticed a new UI library called MoonJS doing the rounds on some of the news aggregators. MoonJS positions itself as a ‘minimal, blazing fast’ library, and cites some benchmark figures to try to back that up.
