Massive: The asm.js Benchmark

asm.js is a subset of JavaScript that is very easy to optimize. Most often it is generated by a compiler, such as Emscripten, from C or C++ code. The result can run at very high speeds, close to that of the same code compiled natively. For that reason, Emscripten and asm.js are useful for things like 3D game engines, which are usually large and complex C++ codebases that need to be fast. Indeed, top companies in the game industry, such as Unity and Epic, have adopted this approach, and you can see it in action in the Humble Mozilla Bundle, which ran recently.
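To give a flavor of the subset, here is a minimal hand-written asm.js module. This is only an illustrative sketch: real asm.js is normally emitted by Emscripten rather than written by hand, and `MiniModule` is a made-up name, not code from any of the projects mentioned here.

```javascript
// A minimal asm.js module. The "use asm" pragma marks the subset, and
// type annotations are expressed with coercions: x|0 means 32-bit int.
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // declare parameter a as a 32-bit integer
    b = b | 0;          // declare parameter b as a 32-bit integer
    return (a + b) | 0; // the result is also coerced to int
  }
  return { add: add };
}

// Because asm.js is plain JavaScript, this runs in any engine; engines
// with asm.js support can additionally compile it ahead of time.
var mod = MiniModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

This is also why the approach works everywhere: a browser without special asm.js support simply runs the module as ordinary JavaScript.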

As asm.js code becomes more common, it is important to be able to measure performance on it. There are of course plenty of existing benchmarks, including Octane which contains one asm.js test, and JetStream which contains several. However, even those do not contain very large code samples, and massive codebases are challenging in particular ways. For example, just loading a page with such a script can take significant time while the browser parses it, causing a pause that is annoying to the user.

A recent benchmark from Unity measures the performance of their game engine, which (when ported to the web) is a large asm.js codebase. Given the high popularity of the Unity engine among developers, this is an excellent benchmark for game performance in browsers, as real-world as it can get, and also it tests large-scale asm.js. It does however focus on game performance as a whole, taking into account both WebGL and JavaScript execution speed. For games, that overall result is often what you care about, but it is also interesting to measure asm.js on its own.

Benchmarking asm.js specifically

Massive is a benchmark that measures asm.js performance specifically. It contains several large, real-world codebases: Poppler, SQLite, Lua and Box2D; see the FAQ on the massive site for more details on each of those.

Massive reports an overall score, summarizing its individual measurements. This score can help browser vendors track their performance over time and point to areas where improvements are needed, and for developers it provides a simple way to get an idea of how fast asm.js execution is on a particular device and browser.

Importantly, Massive does not only test throughput. As already mentioned, large codebases can affect startup time, and they can also affect responsiveness and other important aspects of the user experience. Massive therefore tests, in addition to throughput, how long it takes the browser to load a large codebase, and how responsive it is while doing so. It also tests how consistent performance is. Once again, see the FAQ for more details on each of those.

Massive has been developed openly on GitHub from day one, and we’ve solicited and received feedback from many relevant parties. Over the last few months Massive development has been in beta while we received comments, and there are currently no substantial outstanding issues, so we are ready to announce the first stable version, Massive 1.0.

Massive tests multiple aspects of performance, in new ways, so it is possible something is not being measured in an optimal manner, and of course bugs always exist in software. However, by developing Massive in the open and thereby giving everyone the chance to inspect it and report issues, and by having a lengthy beta period, we believe we have the best possible chance of a reliable result. Of course, if you do find something wrong, please file an issue! General feedback is of course always welcome as well.

Massive performance over time

Massive is brand-new, but it is still interesting to look at how it performs on older browsers (“retroactively”), because if it measures something useful, and if browsers are moving in the right direction, then we should see Massive improve over time, even on browser versions that were released long before Massive existed. The graph below shows Firefox performance from version 14 (released 2012-07-17, over 2 years ago) to version 32 (which became the stable version in September 2014):

Higher numbers are better, so we can indeed see that Massive scores do follow the expected pattern of improvement, with Firefox’s Massive score rising to around 6x its starting point 2 years ago. Note that the Massive score is not “linear” in the sense that 6x the score means 6x the performance, as it is calculated using the geometric mean (like Octane); however, the individual scores it averages are mostly linear. A 6x improvement therefore does represent a very large and significant speedup.
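As a sketch of how geometric-mean scoring works (illustrative only, not Massive’s actual scoring code), combining per-test scores looks like this:

```javascript
// Combine per-test scores into one overall score with a geometric mean,
// the same averaging approach Octane uses. Summing logarithms avoids
// overflow when multiplying many large scores together.
function geometricMean(scores) {
  var logSum = scores.reduce(function (sum, s) {
    return sum + Math.log(s);
  }, 0);
  return Math.exp(logSum / scores.length);
}

console.log(geometricMean([100, 400])); // 200
```

One consequence of this choice is that doubling a single test’s score multiplies the overall score by the n-th root of 2, so no single test can dominate the result.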

Looking more closely at the changes over time, we can identify which features landed in each of the Firefox versions that show a significant improvement:

There are three big jumps in Firefox’s Massive score, each annotated:

  • Firefox 22 introduced OdinMonkey, an optimization module for asm.js code. By specifically optimizing asm.js content, it almost doubled Firefox’s Massive score. (At the time, of course, Massive didn’t exist; but we measured speedups on other benchmarks.)
  • Firefox 26 parses async scripts off the main thread. This prevents the browser or page from becoming unresponsive while the script loads. For asm.js content, not only parsing but also compilation happens in the background, making the user experience even smoother. Firefox 26 also includes general optimizations for float32 operations, which appear in one of the Massive tests.
  • Firefox 29 caches asm.js code: the second time you visit the same site, previously compiled asm.js code is simply loaded from disk, avoiding any compilation pause at all. Another speedup in this version is that the earlier float32 optimizations now apply fully to asm.js code as well.
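The float32 optimizations mentioned above target code that coerces values with `Math.fround`, which tells the engine a value fits in a 32-bit float. A minimal sketch of the pattern (`hypot32` is a hypothetical helper written for illustration, not code from Massive):

```javascript
// Math.fround rounds a number to the nearest 32-bit float. Wrapping
// every intermediate result in fround lets an engine perform the whole
// computation in single precision instead of doubles.
var f = Math.fround;

function hypot32(x, y) {
  x = f(x);
  y = f(y);
  return f(Math.sqrt(f(f(x * x) + f(y * y))));
}

console.log(hypot32(3, 4)); // 5
```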

Large codebases, and why we need a new benchmark

Each of those features is expected to improve asm.js performance, so it makes sense to see large speedups there. So far, everything looks pretty much as we would expect. However, a fourth milestone is noted on that graph, and it doesn’t cause any speedup. That feature is IonMonkey, which landed in Firefox 18. IonMonkey was a new optimizing compiler for Firefox, and it provided very large speedups on most common browser benchmarks. Why, then, doesn’t it show any benefit in Massive?

IonMonkey does help very significantly on small asm.js codebases. But in its original release in Firefox 18 (see more details in the P.S. below), IonMonkey did not do well on very large ones: as a complex optimizing compiler, its compilation time is not necessarily linear in script size, which means that large scripts can take a very long time to compile. IonMonkey therefore included a script size limit – above a certain size, IonMonkey simply never kicked in. This explains why Massive does not improve on Firefox 18, when IonMonkey landed: Massive contains very large codebases, and IonMonkey at the time could not actually run on them.

That shows exactly why a benchmark like Massive is necessary, as other benchmarks did show speedups upon IonMonkey’s launch. In other words, Massive is measuring something that other benchmarks do not. And that thing – large asm.js codebases – is becoming more and more important.

(P.S. IonMonkey’s script size limit prevented large codebases from being optimized when IonMonkey originally launched, but that limit has been relaxed over time, and practically does not exist today. This has been made possible by compilation on a background thread, interruptible compilation, and straightforward improvements to compilation speed, all of which make it feasible to compile larger and larger functions. Exciting general improvements to JavaScript engines are constantly happening across the board!)

About Robert Nyman [Editor emeritus]

Technical Evangelist & Editor of Mozilla Hacks. Gives talks & blogs about HTML5, JavaScript & the Open Web. Robert is a strong believer in HTML5 and the Open Web and has been working since 1999 with Front End development for the web - in Sweden and in New York City. He regularly also blogs at http://robertnyman.com and loves to travel and meet people.


About Alon Zakai

Alon is on the research team at Mozilla, where he works primarily on Emscripten, a compiler from C and C++ to JavaScript. Alon founded the Emscripten project in 2010.



13 comments

  1. M. Edward (Ed) Borasky

    Fascinating – has anyone looked at the current Mozilla JavaScript engines vs. V8 on Node.js?

    November 3rd, 2014 at 02:43

    1. Anique

      Yup Spidermonkey was beating v8 on their own benchmarks

      November 3rd, 2014 at 04:56

      1. dom

        At SOME of their “own” benchmarks

        November 3rd, 2014 at 05:49

        1. nemo

          I’m guessing Anique is referring to:
          http://robert.ocallahan.org/2014/10/are-we-fast-yet-yes-we-are.html

          In the extensive discussion on HN, this thread seemed to reflect what dom was saying about “own”

          https://news.ycombinator.com/item?id=8519182

          But on almost all the JS benchmarks I can find, Mozilla seems to be doing very very well, I *am* curious how it would do in Node.js

          November 3rd, 2014 at 09:17

          1. njn

            Node’s implementation is intimately tied to the APIs provided by V8. There have been some attempts at getting SpiderMonkey to work with Node (search for “Spidernode”), but it’s a lot of work and they never got very far.

            November 3rd, 2014 at 21:57

  2. nXqd

    @Edward: There was an implementation of the Mozilla engine on Node.js but it was dropped. If I remember correctly, there are no active ones for now :)

    November 3rd, 2014 at 06:52

  3. Ciro S. Costa

    Great article! Cool to see how asm is performing! Seems like the concept of JS as a compile target is getting closer to reality each time (wait, it already is, isn’t it?). Didn’t know about having the compilation phase off the main thread, that’s a huge thing!

    Is it fair to say that with Emscripten for generating the code and OdinMonkey for interpreting that specialized code we are somehow dealing with a ‘bytecode’-a-like thing but with javascript? Thanks!

    November 3rd, 2014 at 07:35

    1. Luke

      Asm.js should work with all browsers, now, as it’s just a small subset of JS. That’s the advantage compared to bytecode-based languages that require a plugin or a specific browser.

      November 3rd, 2014 at 19:24

      1. Ciro S. Costa

        Hmm, that’s fair (and also a great approach). Was thinking more in terms of the kind of “layer” that the compilation process introduces; maybe a bad metaphor.

        Another question .. will b2g take advantage of asm (if it is not already)? Seems like a big win for it if it can get a very optimized code for running html5 games and etc.

        November 4th, 2014 at 13:29

  4. Owen Densmore

    Not just cpu performance, but use of typed arrays makes a HUGE difference. Our team calls this area “Massive JavaScript”.

    We can handle multiple arrays of a million TA types with ease. We were mainly interested in storage efficiency .. but using a trivial test harness we were able to see also a fast.js-like improvement in speed filling the arrays and accessing them.
    http://backspaces.net/temp/memory.html

    It is *really* simple to use:
    1 – Go to the page and read the instructions
    2 – Open the console to see the results
    3 – Use the REST interface to try some of the examples, and create some of your own.

    November 5th, 2014 at 10:04

  5. Patrick Martin

    Just compile linpak to javascript :P

    November 5th, 2014 at 21:37

  6. Thodoris Greasidis

    It would be nice if this could be included to the main page of arewefastyet.com

    November 14th, 2014 at 09:57

    1. njn

      It’s not on the main page of arewefastyet.com, but if you click on the “Breakdown” link you can see results for each of the asm.js benchmarks individually.

      November 14th, 2014 at 13:37
