JavaScript Articles

  1. saving data with localStorage

    This post was written by Jeff Balogh. Jeff works on Mozilla’s web development team.

    New in Firefox 3.5, localStorage is part of the Web Storage specification. localStorage provides a simple JavaScript API for persisting key-value pairs in the browser. It shouldn’t be confused with the SQL database storage proposal, which is a separate (and more contentious) part of the Web Storage spec. Key-value pairs could conceivably be stored in cookies, but you wouldn’t want to do that: cookies are sent to the server with every request, presenting performance issues with large data sets and potential security problems, and you would have to write your own interface for treating cookies like a database.

    Here’s a small demo that stores the content of a textarea in localStorage. You can change the text, open a new tab, and find your updated content. Or you can restart the browser and your text will still be there.


    The easiest way to use localStorage is to treat it like a regular object:

    >>> localStorage.foo = 'bar'
    >>> localStorage.foo
    "bar"
    >>> localStorage.length
    1
    >>> localStorage[0]
    "foo"
    >>> localStorage['foo']
    "bar"
    >>> delete localStorage['foo']
    >>> localStorage.length
    0
    >>> localStorage.not_set
    null

    There’s also a more wordy API for people who like that sort of thing:

    >>> localStorage.clear()
    >>> localStorage.setItem('foo', 'bar')
    >>> localStorage.getItem('foo')
    "bar"
    >>> localStorage.key(0)
    "foo"
    >>> localStorage.removeItem('foo')
    >>> localStorage.length
    0

    If you want to have a localStorage database mapped to the current session, you can use sessionStorage. It has the same interface as localStorage, but the lifetime of sessionStorage is limited to the current browser window. You can follow links around the site in the same window and sessionStorage will be maintained (going to different sites is fine too), but once that window is closed the database will be deleted. localStorage is for long-term storage, as the W3C spec instructs browsers to consider the data “potentially user-critical”.
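
    For example (a minimal sketch – the interface mirrors localStorage exactly):

    >>> sessionStorage.setItem('draft', 'unsaved text')
    >>> sessionStorage.getItem('draft')
    "unsaved text"

    Close the window, and 'draft' is gone; localStorage would have kept it.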

    I was a tad disappointed when I found out that localStorage only supports storing strings, since I was hoping for something more structured. But with native JSON support it’s easy to create an object store on top of localStorage:

    // Store any JSON-serializable value under `key`.
    Storage.prototype.setObject = function(key, value) {
        this.setItem(key, JSON.stringify(value));
    };

    // Retrieve and deserialize a stored value; returns null for absent keys.
    Storage.prototype.getObject = function(key) {
        return JSON.parse(this.getItem(key));
    };
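
    With these helpers in place, structured values round-trip through localStorage (a quick sketch):

    >>> localStorage.setObject('user', {name: 'Jeff', team: 'webdev'})
    >>> localStorage.getObject('user').name
    "Jeff"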

    localStorage databases are scoped to an HTML5 origin, basically the tuple (scheme, host, port). This means that the database is shared across all pages on the same domain, even concurrently by multiple browser tabs. However, a page connecting over http:// cannot see a database that was created during an https:// session.

    localStorage and sessionStorage are supported by Firefox 3.5, Safari 4.0, and IE8. You can find more compatibility details on quirksmode.org, including more detail on the storage event.

  2. using HTML5 video with fallbacks to other formats

    The Mozilla Support Project and support.mozilla.com (SUMO for short) is an open, volunteer-powered community that helps over 3 million Firefox users a week get support and help with their favorite browser. The Firefox support community maintains a knowledge base with articles in over 30 languages and works directly with users through our support forums or live chat service. They’ve put together the following demonstration of how to use open video and Flash-based video at the same time to provide embedded videos to users regardless of their browser. This demo article was written by Laura Thomson, Cheng Wang, and Eric Cooper.

    Note: This article shows how to add video objects to a page using JavaScript. For most pages we suggest that you read the article that contains information on how to use the video tag to provide simple markup fallbacks. Markup-based fallbacks are much more elegant than JavaScript solutions and are generally recommended for use on the web.

    One of the challenges to open video adoption on the web is making sure that online video still performs well on browsers that don’t currently support open video.  Rather than asking users with these browsers to download the video file and use a separate viewer, the new <video> tag degrades gracefully to allow web developers to provide a good experience for everyone.  As Firefox 3.5 upgrades the web, users in this transitional period can be shown open video or video using an existing plugin in an entirely seamless way depending on what their browser supports.

    At SUMO, we use this system to provide screencasts of problem-solving steps such as in the article on how to make Firefox the default browser.

    If you visit the page using Firefox 3.5, or another browser with open video support, and click on the “Watch a video of these instructions” link, you get a screencast using an Ogg-formatted file.  If you visit with a browser that does not support <video> you get the exact same user experience using an open source .flv player and the same video encoded in the .flv format, or in some cases using the SWF Flash format.  These alternate viewing methods use the virtually ubiquitous Adobe Flash plugin which is one of the most common ways to show video on the web.

    The code works as follows.

    In the page that contains the screencasts, we include some JavaScript. Excerpts from this code follow, but you can see or check out the complete listing from Mozilla SVN.

    The code begins by setting up an object to represent the player:

    Screencasts.Player = {
        width: 640,
        height: 480,
        thumbnails: [],
        priority: { 'ogg': 1, 'swf': 2, 'flv': 3 },
        handlers: { 'swf': 'Swf', 'flv': 'Flash', 'ogg': 'Ogg' },
        properNouns: { 'swf': 'Flash', 'flv': 'Flash', 'ogg': 'Ogg Theora' },
        canUseVideo: false,
        isThumb: false,
        thumbWidth: 160,
        thumbHeight: 120
    };

    We allocate a priority to each of the possible video formats.  You’ll notice we also have the 'canUseVideo' attribute, which defaults to false.

    Later on in the code, we test the user’s browser to see if it is video-capable:

    var detect = document.createElement('video');
    if (typeof detect.canPlayType === 'function' &&
        detect.canPlayType('video/ogg;codecs="theora,vorbis"') == 'probably') {
        Screencasts.Player.canUseVideo = true;
        Screencasts.Player.priority.ogg = Screencasts.Player.priority.flv + 2;
    }

    If we can create a video element and it indicates that it can play the Ogg Theora format, we set canUseVideo to true and increase the priority of the Ogg file to be greater than the priority of the .flv file. (Note that you could also detect whether the browser can play .mp4 files to support Safari out of the box, as sketched below.)
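
    Here is a hedged sketch of that .mp4 variation. The 'mp4' priority entry and the codec string are assumptions for illustration – the player object above defines only ogg, swf, and flv:

    if (typeof detect.canPlayType === 'function' &&
        detect.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== '') {
        // canPlayType returns "", "maybe", or "probably"
        Screencasts.Player.canUseVideo = true;
        Screencasts.Player.priority.mp4 =               // hypothetical mp4 entry
            Screencasts.Player.priority.flv + 2;
    }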

    Finally, we use the priority to select which file is actually played, by iterating through the list of files to find the one that has the highest priority:

    for (var x = 0; x < file.length; x++) {
        if (!best) {
            best = file[x];
        } else if (this.priority[best] < this.priority[file[x]]) {
            best = file[x];
        }
    }
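
    To make the excerpt concrete, here is a self-contained sketch with hypothetical data, where file is assumed to be an array of available format extensions for one screencast:

    var priority = { 'ogg': 5, 'swf': 2, 'flv': 3 };  // after the Ogg bump above
    var file = ['flv', 'ogg', 'swf'];                 // available formats
    var best;
    for (var x = 0; x < file.length; x++) {
        if (!best || priority[best] < priority[file[x]]) {
            best = file[x];
        }
    }
    // best === 'ogg'; without the bump (ogg: 1), best would be 'flv'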

    With these parts in place, the browser displays only the highest priority video and in a format that it can handle.

    If you want to learn more about the Mozilla Support Project or get involved in helping Firefox users, check out their guide to getting started.

    Resources

    * Note: To view this demo using Safari 4 on Mac OS X, you will need to add Ogg support to QuickTime using the Xiph QuickTime plugin, available from http://www.xiph.org/quicktime/download.html

  3. WebIDE, Storage inspector, jQuery events, iframe switcher + more – Firefox Developer Tools Episode 34

    A new set of Firefox Developer Tools features has just been uplifted to the Aurora channel. These features are available right now in Aurora, and will be in the Firefox 34 release in November. This release brings new tools (storage inspector, WebIDE), an updated profiler, and handy enhancements to the existing tools:

    WebIDE

    WebIDE, a new tool for in-browser app development, has been enabled by default in this release. WebIDE lets you create a new Firefox OS app (which is just a web app) from a template, or open up the code for an already created app. From there you can edit the app’s files. It’s one click to run the app in a simulator and one more to debug it with the developer tools. Open WebIDE from Firefox’s “Web Developer” menu. (docs)

    Storage inspector

    There’s a new panel, created mostly by contributor Girish Shama, that shows the data your page has stored in cookies, localStorage, sessionStorage, and IndexedDB. Enable the Storage panel by checking Settings > “Default Developer Tools” > “Storage”. The panel is read-only for now, with editing support planned for a future release. (docs) (development notes) (UserVoice request)

    [Screenshot: the Storage inspector]

    jQuery events

    The event listener popup in the Inspector now supports jQuery. This means the popup will display the function you attached with e.g. jQuery.on(), and not the jQuery wrapper function itself. See this post for more info and how to add support for your preferred framework. (development notes)

    [Screenshot: jQuery events in the Inspector popup]

    Iframe switcher

    Change the frame you’re debugging using the new frame selection menu. Selecting a frame will switch all of the tools to debug that iframe, including the Inspector, Console, and Debugger. Add the frame selection button by checking Settings > “Available Toolbox Buttons” > “Select an iframe”. (docs) (development notes) (UserVoice request)

    [Screenshot: iframe selection menu]

    Updated profiler

    An updated JavaScript profiler appears in the new “Performance” tab (formerly the “Profiler” tab). New to the profiler are a frame rate timeline and categories for frames like “network” and “graphics”. (docs) (development notes)

    [Screenshot: the new profiler]

    console.table()

    Add a call to console.table() anywhere in your JavaScript to log data to the console using a table-like display. Log any object, array, Map, or Set. Sort a column in the table by clicking on its header. (docs) (development notes)

    [Screenshot: console.table output]
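
    For example, a minimal sketch – any array of objects works:

    console.table([
      { tool: "Storage inspector", status: "new" },
      { tool: "WebIDE",            status: "new" },
      { tool: "Profiler",          status: "updated" }
    ]);
    // logs a sortable three-row table to the console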

    Selector preview

    Hover over a CSS selector in the Inspector or Style Editor to highlight all the nodes that match that selector on the page. (development notes)

    [Screenshot: selector previews]

    Other mentions

    • Persistent split console – The split console (opened by pressing ESC) will now open with the tools if you had it open the last time the tools were closed. (development notes)
    • Web audio – AudioParam connections – the Web Audio Editor now displays connections from AudioNodes to AudioParams. (development notes)

    Special thanks to the 41 contributors who added all the features and fixes in this release.

    Comment here, shoot feedback to @FirefoxDevTools on Twitter, or propose changes on the Developer Tools feedback channel. If you’d like to help out, check out the guide to getting involved.

  4. Black Box Driven Development in JavaScript

    Sooner or later every developer discovers the beauty of design patterns. And sooner or later the same developer discovers that most patterns are not applicable in their pure form. Very often we use variations; we change the well-known definitions to fit our use cases. I know that we (the programmers) like buzzwords, so here is a new one – Black Box Driven Development, or simply BBDD. I started applying the concept a couple of months ago, and I can say that the results are promising. After finishing several projects, I started seeing the good practices and formed three principles.

    What is a black box?

    Before going into the principles of BBDD, let’s see what is meant by a black box. According to Wikipedia:

    In science and engineering, a black box is a device, system or object which can be viewed in terms of its input, output and transfer characteristics without any knowledge of its internal workings.

    In programming, every piece of code that accepts input, performs actions, and returns output can be considered a black box. In JavaScript, we can apply the concept easily by using a function. For example:

    var Box = function(a, b) {
        var result = a + b;
        return result;
    }

    This is the simplest version of a BBDD unit. It is a box that performs an operation and returns its output immediately. However, very often we need something else: continuous interaction with the box. This is another kind of box that I like to call a living black box.

    var Box = function(a, b) {
        var api = {
            calculate: function() {
                return a + b;
            }
        };
        return api;
    }

    We have an API containing all the public functions of the box. It is identical to the revealing module pattern. The most important characteristic of this pattern is that it brings encapsulation. We have a clear separation of the public and private objects.
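
    For example, a quick usage sketch:

    var box = Box(2, 3);
    box.calculate(); // 5 – `a` and `b` stay private; only the API object escapes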

    Now that we know what a black box is, let’s check out the three principles of BBDD.

    Principle 1: Modularize everything

    Every piece of logic should exist as an independent module – in other words, a black box. At the beginning of the development cycle it is a little difficult to recognize these pieces. Spending too much time architecting the application before having even a line of code may not produce good results. The approach that works involves coding: we should sketch the application and even build part of it. Once we have something, we can start thinking about black boxing it. It is also much easier to jump into the code and build something without worrying whether it is right or wrong. The key is to refactor the implementation until you feel it is good enough.

    Let’s take the following example:

    $(document).ready(function() {
        if(window.localStorage) {
            var products = JSON.parse(window.localStorage.getItem('products')) || [], content = '';
            for(var i=0; i<products.length; i++) {
                content += products[i].name + '<br />';
            }
            $('.content').html(content);
        } else {
            $('.error').css('display', 'block');
            $('.error').html('Error! Local storage is not supported.')
        }
    });

    We are getting an array called products from the local storage of the browser (deserializing it from JSON, since local storage holds only strings). If the browser does not support local storage, we show a simple error message.

    The code as it is works fine. However, several responsibilities are merged into a single function. The first improvement we have to make is to form a good entry point for our code. Passing a newly defined closure straight to $(document).ready is not flexible – what if we want to delay the execution of our initial code, or run it in a different way? The snippet above could be transformed into the following:

    var App = function() {
        var api = {};
        api.init = function() {
            if(window.localStorage) {
                var products = JSON.parse(window.localStorage.getItem('products')) || [], content = '';
                for(var i=0; i<products.length; i++) {
                    content += products[i].name + '<br />';
                }
                $('.content').html(content);
            } else {
                $('.error').css('display', 'block');
                $('.error').html('Error! Local storage is not supported.');
            }
            return api;
        }
        return api;
    }
     
    var application = App();
    $(document).ready(application.init);

    Now, we have better control over the bootstrapping.

    The source of our data at the moment is the local storage of the browser. However, we may need to fetch the products from a database or simply use a mock-up. It makes sense to extract this part of the code:

    var Storage = function() {
        var api = {};
        api.exists = function() {
            return !!window && !!window.localStorage;
        };
        api.get = function() {
            return JSON.parse(window.localStorage.getItem('products')) || [];
        }
        return api;
    }

    We have two other operations that could form another box – setting HTML content and showing an element. Let’s create a module that handles the DOM interaction.

    var DOM = function(selector) {
        var api = {}, el;
        var element = function() {
            if(!el) {
                el = $(selector);
                if(el.length == 0) {
                    throw new Error('There is no element matching "' + selector + '".');
                }
            }
            return el;
        }
        api.content = function(html) {
            element().html(html);
            return api;
        }
        api.show = function() {
            element().css('display', 'block');
            return api;
        }
        return api;
    }

    The code is doing the same thing as in the first version. However, we now have a test function, element, that checks whether the passed selector matches anything in the DOM tree. We are also black boxing the jQuery element, which makes our code much more flexible. Imagine that we decide to remove jQuery: the DOM operations are hidden in this module, so it costs almost nothing to edit it and switch to vanilla JavaScript, for example, or some other library. If we had stayed with the old variant, we would probably have to go through the whole code base replacing code snippets.
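
    For example, the chainable API can be used like this (a short sketch):

    var error = DOM('.error');
    error.content('Error! Local storage is not supported.').show();
    // chaining works because each public method returns `api`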

    Here is the transformed script. A new version that uses the modules that we’ve created above:

    var App = function() {
        var api = {},
            storage = Storage(),
            c = DOM('.content'),
            e = DOM('.error');
        api.init = function() {
            if(storage.exists()) {
                var products = storage.get(), content = '';
                for(var i=0; i<products.length; i++) {
                    content += products[i].name + '<br />';
                }
                c.content(content);
            } else {
                e.content('Error! Local storage is not supported.').show();
            }
            return api;
        }
        return api;
    }

    Notice that we have a separation of responsibilities. We have objects that play roles. It is easier and much more interesting to work with such a codebase.

    Principle 2: Expose only public methods

    What makes the black box valuable is the fact that it hides the complexity. The programmer should expose only methods (or properties) that are needed. All the other functions that are used for internal processes should be private.

    Let’s get the DOM module above:

    var DOM = function(selector) {
        var api = {}, el;
        var element = function() {}
        api.content = function(html) {}
        api.show = function() {}
        return api;
    }

    When developers use our class, they are interested in two things – changing the content and showing a DOM element. They should not have to think about validation or changing CSS properties. In our example, there is a private variable el and a private function element; both are hidden from the outside world.

    Principle 3: Use composition over inheritance

    One of the popular ways to inherit classes in JavaScript uses the prototype chain. In the following snippet, we have class A that is inherited by class C:

    function A(){};
    A.prototype.someMethod = function(){};
     
    function C(){};
    C.prototype = new A();
    C.prototype.constructor = C;

    However, if we use the revealing module pattern, it makes sense to use composition, because we are dealing with objects rather than constructor functions (in fact, functions in JavaScript are also objects). Let’s say that we have a box that implements the observer pattern, and we want to extend it.

    var Observer = function() {
        var api = {}, listeners = {};
        api.on = function(event, handler) {};
        api.off = function(event, handler) {};
        api.dispatch = function(event) {};
        return api;
    }
     
    var Logic = function() {
        var api = Observer();
        api.customMethod = function() {};
        return api;
    }

    We get the required functionality by assigning an initial value to the api variable. Note that every class that uses this technique receives a brand-new observer object, so there is no way to produce collisions.
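
    Assuming the Observer stubs above are fully implemented, a usage sketch looks like this:

    var logic = Logic();
    logic.on('ready', function() {
        console.log('Logic is ready.');
    });
    logic.customMethod();    // the extension
    logic.dispatch('ready'); // the observer API, acquired through composition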

    Summary

    Black box driven development is a nice way to architect your applications. It provides encapsulation and flexibility. BBDD comes with a simple module definition that helps with organizing big projects (and teams). I have seen several developers work on the same project, and they all built their black boxes independently.

  5. Capturing – Improving Performance of the Adaptive Web

    Responsive design is now widely regarded as the dominant approach to building new websites. With good reason, too: a responsive design workflow is the most efficient way to build tailored visual experiences for different device screen sizes and resolutions.

    Responsive design, however, is only the tip of the iceberg when it comes to creating a rich, engaging mobile experience.

    [Image source: For a Future-Friendly Web by Brad Frost]

    The issue of performance with responsive websites

    Performance is one of the most important features of a website, but is also frequently overlooked. Performance is something that many developers struggle with – in order to create high-performing websites you need to spend a lot of time tuning your site’s backend. Even more time is required to understand how browsers work, so that you make rendering pages as fast as possible.

    When it comes to creating responsive websites, the performance challenges are even more difficult because you have a single set of markup that is meant to be consumed by all kinds of devices. One problem you hit is the responsive image problem – how do you ensure that big images intended for your Retina Macbook Pro are not downloaded on an old Android phone? How do you prevent desktop ads from rendering on small screen devices?

    It’s easy to overlook performance as a problem because we often conduct testing under perfect conditions – using a fast computer, fast internet, and close proximity to our servers. Just to give you an idea of how widespread this problem is, we conducted an analysis of some top responsive e-commerce sites, which revealed that the average responsive site home page consists of 87.2 resources and is made up of 1.9 MB of data.

    It is possible to solve the responsive performance problem by making the necessary adjustments to your website manually, but performance tuning by hand involves both complexity and repetition, and that makes it a great candidate for creating tools. With Capturing, we intend to make creating high-performing adaptive web experiences as easy as possible.

    Introducing Capturing

    Capturing is a client-side API we’ve developed to give developers complete control over the DOM before any resources have started loading. With responsive sites, it is a challenge to control what resources you want to load based on the conditions of the device: all current solutions require you to make significant changes to your existing site by either using server-side user-agent detection, or by forcing you to break semantic web standards (for example, changing the src attribute to data-src).

    Our approach to give you resource control is done by capturing the source markup before it has a chance to be parsed by the browser, and then reconstructing the document with resources disabled.

    The ability to control resources client-side gives you an unprecedented amount of control over the performance of your website.

    Capturing was a key feature of Mobify.js 1.1, our framework for creating mobile and tablet websites using client-side templating. We have since reworked Mobify.js in our 2.0 release to be a much more modular library that can be used in any existing website, with Capturing as the primary focus.

    A solution to the responsive image problem

    One way people have been tackling the responsive image problem is by modifying existing backend markup, changing the src of all their img elements to something like data-src, and accompanying that change with a <noscript> fallback. The reason this is done is discussed in this CSS-Tricks post:

    “a src that points to an image of a horse will start downloading as soon as that image gets parsed by the browser. There is no practical way to prevent this.”

    With Capturing, this is no longer true.

    Say, for example, you had an img element that you want to modify for devices with Retina screens, but you didn’t want the original image in the src attribute to load. Using Capturing, you could do something like this:

    if (window.devicePixelRatio && window.devicePixelRatio >= 2) {
        // capturedDoc is the captured document exposed by the Capturing API
        var bannerImg = capturedDoc.getElementById("banner");
        bannerImg.src = "retinaBanner.png";
    }

    Because we have access to the DOM before any resources are loaded, we can swap the src of images on the fly before they are downloaded. The latter example is very basic – a better example to highlight the power of Capturing is to demonstrate a perfect implementation of the picture polyfill.

    Picture Polyfill

    The Picture element is the official W3C HTML extension for dealing with adaptive images. There are polyfills that exist in order to use the Picture element in your site today, but none of them are able to do a perfect polyfill – the best polyfill implemented thus far requires a <noscript> tag surrounding an img element in order to support browsers without JavaScript. Using Capturing, you can avoid this madness completely.

    Open the example and be sure to fire up the network tab in web inspector to see which resources get downloaded:

    Here is the important chunk of code that is in the source of the example:

    <picture>
        <source src="/examples/assets/images/small.jpg">
        <source src="/examples/assets/images/medium.jpg" media="(min-width: 450px)">
        <source src="/examples/assets/images/large.jpg" media="(min-width: 800px)">
        <source src="/examples/assets/images/extralarge.jpg" media="(min-width: 1000px)">
        <img src="/examples/assets/images/small.jpg">
    </picture>

    Take note that there is an img element that uses a src attribute, but the browser only downloads the correct image. You can see the code for this example here (note that the polyfill is only available in the example, not the library itself – yet):

    Not all sites use modified src attributes and <noscript> tags to solve the responsive image problem. An alternative, if you don’t want to rely on modifying src or adding <noscript> tags for every image of your site, is to use server-side detection in order to swap out images, scripts, and other content. Unfortunately, this solution comes with a lot of challenges.

    It was easy to use server-side user-agent detection when the only device you needed to worry about was the iPhone, but with the number of new devices rolling out, keeping a dictionary of all devices containing information about their screen width, device pixel ratio, and more is a very painful task; not to mention there are certain things you cannot detect server-side from the user-agent – such as actual network bandwidth.

    What else can you do with Capturing?

    Solving the responsive image problem is a great use-case for Capturing, but there are also many more. Here’s a few more interesting examples:

    Media queries in markup to control resource loading

    In this example, we use media queries in attributes on images and scripts to determine which ones will load, just to give you an idea of what you can do with Capturing. This example can be found here:

    Complete re-writing of a page using templating

    The primary function of Mobify.js 1.1 was client-side templating to completely rewrite the pages of your existing site when responsive design doesn’t offer enough flexibility, or when changing the backend is simply too painful and tedious. It is particularly helpful when you need a mobile presence, fast. This is no longer the primary function of Mobify.js, but it is still possible using Capturing.

    Check out this basic example:

    In this example, we’ve taken parts of the existing page and used them in a completely new markup rendered to browser.

    Fill your page with grumpy cats

    And of course, there is nothing more useful than replacing all the images in a page with grumpy cats! In a high-performing way, of course ;-).

    Once again, open up web inspector to see that the original images on the site did not download.

    Performance

    So what’s the catch? Is there a performance penalty to using Capturing? Yes, there is, but we feel the performance gains you can make by controlling your resources outweigh the minor penalty that Capturing brings. On first load, the library (and main executable if not concatenated together), must download and execute, and the load time here will vary depending on the round trip latency of the device (ranges from around ~60ms to ~300ms). However, the penalty of every subsequent request will be reduced by at least half due to the library being cached, and the just-in-time (JIT) compiler making the compilation much more efficient. You can run the test yourself!

    We also do our best to keep the size of the library to a minimum – at the time of publishing this blog post, the library is 4KB minified and gzipped.

    Why should you use Capturing?

    We created Capturing to give more control of performance to developers on the front-end. The reason other solutions fail to solve this problem is that the responsibilities of the front-end and backend have become increasingly intertwined. The backend’s responsibility should be to generate semantic web markup, and it should be the front-end’s responsibility to take the markup from the backend and process it in such a way that it is best visually represented on the device, in a high-performing way. Responsive design solves the first issue (visually representing data), and Capturing helps solve the second (increasing performance on websites by using front-end techniques such as determining screen size and bandwidth to control resource loading).

    If you want to continue to obey the laws of the semantic web, and if you want an easy way to control performance at the front-end, we highly recommend that you check out Mobify.js 2.0!

    How can I get started using Capturing?

    Head over to our quick start guide for instructions on how to get setup using Capturing.

    What’s next?

    We’ve begun with an official developer preview of Mobify.js 2.0, which includes just the Capturing portion, but we will be adding more and more useful features.

    The next feature on the list to add is automatic resizing of images, allowing you to dynamically download images based on the size of the browser window without the need to modify your existing markup (aside from inserting a small JavaScript snippet)!

    We also plan to create other polyfills that can only be solved with Capturing, such as the new HTML5 Template Tag, for example.

    We look forward to your feedback, and we are excited to see what other developers will do with our new Mobify.js 2.0 library!

  6. Firefox 4 Performance

    Dave Mandelin from the JS team and Joe Drew from the Graphics team summarize the key performance improvements in Firefox 4.

    The web wants fast browsers. Cutting-edge HTML5 web pages play games, mash up and share maps, sound, and videos, show spreadsheets and presentations, and edit photos. Only a high-performance browser can do that. What the web wants, it’s our job to make, and we’ve been working hard to make Firefox 4 fast.

    Firefox 4 comes with performance improvements in almost every area. The most dramatic improvements are in JavaScript and graphics, which are critical for modern HTML5 apps and games. In the rest of this article, we’ll profile the key performance technologies and show how they make the web that much “more awesomer”.

    Fast JavaScript: Uncaging the JägerMonkey
    JavaScript is the programming language of the web, powering most of the dynamic content and behavior, so fast JavaScript is critical for rich apps and games. Firefox 4 gets fast JavaScript from a beast we call JägerMonkey. In techno-gobbledygook, JägerMonkey is a multi-architecture per-method JavaScript JIT compiler with 64-bit NaN-boxing, inline caching, and register allocation. Let’s break that down:

      Multi-architecture
      JägerMonkey has full support for x86, x64, and ARM processors, so we’re fast on both traditional computers and mobile devices. W00t!
      (Crunchy technical stuff ahead: if you don’t care how it works, skip the rest of the sections.)

      Per-method JavaScript JIT compilation

      The basic idea of JägerMonkey is to translate (compile) JavaScript to machine code, “just in time” (JIT). JIT-compiling JavaScript isn’t new: previous versions of Firefox feature the TraceMonkey JIT, which can generate very fast machine code. But some programs can’t be “jitted” by TraceMonkey. JägerMonkey has a simpler design that is able to compile everything in exchange for not doing quite as much optimization. But it’s still fast. And TraceMonkey is still there, to provide a turbo boost when it can.

      64-bit NaN-boxing
      That’s the technical name for the new 64-bit formats the JavaScript engine uses to represent program values. These formats are designed to help the JIT compilers and tuned for modern hardware. For example, think about floating-point numbers, which are 64 bits. With the old 32-bit value formats, floating-point calculations required the engine to allocate, read, write, and deallocate extra memory, all of which is slow, especially now that processors are much faster than memory. With the new 64-bit formats, no extra memory is required, and calculations are much faster. If you want to know more, see the technical article Mozilla’s new JavaScript value representation.
      Inline caching
      Property accesses, like o.p, are common in JavaScript. Without special help from the engine, they are complicated, and thus slow: first the engine has to search the object and its prototypes for the property, then find out where the value is stored, and only then read the value. The idea behind inline caching is: “What if we could skip all that other junk and just read the value?” Here’s how it works: the engine assigns every object a shape that describes its prototype and properties. At first, the JIT generates machine code for o.p that gets the property by laborious searching. But once that code runs, the JIT finds out what o's shape is and how to get the property. The JIT then generates specialized machine code that simply verifies that the shape is the same and gets the property. For the rest of the program, that o.p runs about as fast as possible. See the technical article PICing on JavaScript for fun and profit for much more about inline caching.

      Register allocation
      Code generated by basic JITs spends a lot of time reading and writing memory: for code like x+y, the machine code first reads x, then reads y, adds them, and then writes the result to temporary storage. With 64-bit values, that’s up to 6 memory accesses. A more advanced JIT, such as JägerMonkey, generates code that tries to hold most values in registers. JägerMonkey also does some related optimizations, like trying to avoid storing values at all when they are constant or just a copy of some other value.

    Here’s what JägerMonkey does to our benchmark scores:

    That’s more than 3x improvement on SunSpider and Kraken and more than 6x on V8!

    Fast Graphics: GPU-powered browsing
    For Firefox 4, we sped up how Firefox draws and composites web pages using the Graphics Processing Unit (GPU) in most modern computers.

    On Windows Vista and Windows 7, all web pages are hardware accelerated using Direct2D. This provides a great speedup for many complex web sites and demo pages.

    On Windows and Mac, Firefox uses 3D frameworks (Direct3D or OpenGL) to accelerate the composition of web page elements. This same technique is also used to accelerate the display of HTML5 video.

    Final take
    Fast, hardware-accelerated graphics combined with fast JavaScript means cutting-edge HTML5 games, demos, and apps run great in Firefox 4. You can see it on some of the sites we enjoyed making fast. There’s plenty more to try in the Mozilla Labs Gaming entries and of course, be sure to check out the Web O’ Wonder.

  7. a quick note on JavaScript engine components

    There have been a bunch of posts about the JägerMonkey (JM) post that we made the other day, some of which get things subtly wrong about the pieces of technology that are being used as part of Mozilla’s JM work. So here’s the super-quick overview of what we’re using, what the various parts do and where they came from:

    1. SpiderMonkey. This is Mozilla’s core JavaScript interpreter. This engine takes raw JavaScript and turns it into an intermediate bytecode, which is then interpreted. SpiderMonkey was responsible for all JavaScript handling in Firefox 3 and earlier. We continue to make improvements to this engine, as it’s still the basis for a lot of the work we did in Firefox 3.5, 3.6, and later releases as well.

    2. Tracing. Tracing was added before Firefox 3.5 and was responsible for much of the big jump that we made in performance. (Although some of that was because we also improved the underlying SpiderMonkey engine as well.)

    This is what we do to trace:

    1. Monitor interpreted JavaScript code during execution looking for code paths that are used more than once.
    2. When we find a piece of code that’s used more than once, optimize that code.
    3. Take that optimized representation and assemble it to machine code and execute it.

    What we’ve found since Firefox 3.5 is that when we’re in full tracing mode, we’re really really fast. We’re slow when we have to “fall back” to SpiderMonkey and interpret + record.

    One difficult part of tracing is generating code that runs fast. This is done by a piece of code called Nanojit. Nanojit is a piece of code that was originally part of the Tamarin project. Mozilla isn’t using most of Tamarin for two reasons: 1. we’re not shipping ECMAScript 4 and 2. the interpreted part of Tamarin was much slower than SpiderMonkey. For Firefox 3.5 we took the best part – Nanojit – and bolted it to the back of SpiderMonkey instead.

    Nanojit does two things: it takes a high-level representation of JavaScript and does optimization. It also includes an assembler to take that optimized representation and generate native code for machine-level execution.

    Mozilla and Adobe continue to collaborate on Nanojit. Adobe uses Nanojit as part of their ActionScript VM.

    3. Nitro Assembler. This is a piece of code we’re taking from Apple’s version of WebKit that generates native code for execution. The Nitro Assembler is very different from Nanojit: while Nanojit takes a high-level representation, does optimization, and then generates code, all the Nitro Assembler does is generate code. So it’s complex, low-level code, but it doesn’t do the same set of things that Nanojit does.

    We’re using the Nitro assembler (along with a lot of other new code) to basically build what everyone else has – compiled JavaScript – and then we’re going to do what we did with Firefox 3.5 – bolt tracing onto the back of that. So we’ll hopefully have the best of all worlds: SpiderMonkey generating native code to execute like the other VMs with the ability to go onto trace for tight inner loops for even more performance.

    I hope this helps to explain what bits of technology we’re using and how they fit into the overall picture of Firefox’s JS performance.

  8. audio player – HTML5 style

    Last week we featured a demo from Alistair MacDonald (@F1LT3R) where he showed how to animate SVG with Canvas and a bunch of free tools. This week he has another demo for us that shows how you can use the new audio element in Firefox 3.5 with some canvas and JS to build a nice-looking audio player.

    But what’s really interesting about this demo is not so much that it plays audio – lots of people have built audio players – but how it works. If you look at the source code for the page what you’ll find looks something like this:

    <div id="jai">
      <canvas id="jai-transport" width="320" height="20"></canvas>
      <ul class="playlist">
        <li>
          <a href="@F1LT3R - Cryogenic Unrest.ogg">
            F1LT3R - Cryogenic Unrest
          </a>
          <audio src="@F1LT3R - Cryogenic Unrest.ogg"/>.
        <li>
          <a href="@F1LT3R - Ghosts in HyperSpace.ogg">
            F1LT3R - Ghosts in HyperSpace
          </a>
          <audio src="@F1LT3R - Ghosts in HyperSpace.ogg"/>.
      </ul>
    </div>
    (The actual list has fallbacks and is more compact – cleaned up here for easier reading.)

    That’s right – the player above is just a simple HTML unordered list that happens to include audio elements and is styled with CSS. You’ll notice that if you right-click on one of the entries, it has all the usual items – save as, bookmark this link, copy this link location, etc. You can even poke at it with Firebug.

    The JavaScript driver that Al has written looks for a <div> element with the jai ID and then looks for any audio elements inside it. It then draws the playback interface in the canvas at the top of the list. The playback interface is built with simple JS canvas calls and an SVG-derived font.
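
    A minimal sketch of the kind of lookup the driver performs (the actual drawing code, with its canvas calls and SVG-derived font, is in Al’s source):

    var container = document.getElementById('jai');
    var transport = document.getElementById('jai-transport'); // the <canvas>
    var tracks = container.getElementsByTagName('audio');
    for (var i = 0; i < tracks.length; i++) {
        // wire each <audio> element into the canvas transport here
    }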

    Using this driver it’s super-easy to add an audio player to any web site by just defining a canvas and a list. Much like what we’ve seen on a lot of the web with the rise of useful libraries like jQuery, this library can add additional value to easily-defined markup. Another win for HTML5 and the library model.

    Al has a much larger write-up on the same page as the demo. If you haven’t read through it you should now.

    (Also? Al wrote the music himself. So awesome.)

  9. ES6 In Depth: Modules

    ES6 In Depth is a series on new features being added to the JavaScript programming language in the 6th Edition of the ECMAScript standard, ES6 for short.

    When I started on Mozilla’s JavaScript team back in 2007, the joke was that the length of a typical JavaScript program was one line.

    This was two years after Google Maps launched. Not long before that, the predominant use of JavaScript had been form validation, and sure enough, your average <input onchange=> handler would be… one line of code.

    Things have changed. JavaScript projects have grown to jaw-dropping sizes, and the community has developed tools for working at scale. One of the most basic things you need is a module system, a way to spread your work across multiple files and directories—but still make sure all your bits of code can access one another as needed—but also be able to load all that code efficiently. So naturally, JavaScript has a module system. Several, actually. There are also several package managers, tools for installing all that software and coping with high-level dependencies. You might think ES6, with its new module syntax, is a little late to the party.

    Well, today we’ll see whether ES6 adds anything to these existing systems, and whether or not future standards and tools will be able to build on it. But first, let’s just dive in and see what ES6 modules look like.

    Module basics

    An ES6 module is a file containing JS code. There’s no special module keyword; a module mostly reads just like a script. There are two differences.

    • ES6 modules are automatically strict-mode code, even if you don’t write "use strict"; in them.

    • You can use import and export in modules.

    Let’s talk about export first. Everything declared inside a module is local to the module, by default. If you want something declared in a module to be public, so that other modules can use it, you must export that feature. There are a few ways to do this. The simplest way is to add the export keyword.

    // kittydar.js - Find the locations of all the cats in an image.
    // (Heather Arthur wrote this library for real)
    // (but she didn't use modules, because it was 2013)
    
    export function detectCats(canvas, options) {
      var kittydar = new Kittydar(options);
      return kittydar.detectCats(canvas);
    }
    
    export class Kittydar {
      ... several methods doing image processing ...
    }
    
    // This helper function isn't exported.
    function resizeCanvas() {
      ...
    }
    ...
    

    You can export any top-level function, class, var, let, or const.

    And that’s really all you need to know to write a module! You don’t have to put everything in an IIFE or a callback. Just go ahead and declare everything you need. Since the code is a module, not a script, all the declarations will be scoped to that module, not globally visible across all scripts and modules. Export the declarations that make up the module’s public API, and you’re done.

    Apart from exports, the code in a module is pretty much just normal code. It can use globals like Object and Array. If your module runs in a web browser, it can use document and XMLHttpRequest.

    In a separate file, we can import and use the detectCats() function:

    // demo.js - Kittydar demo program
    
    import {detectCats} from "kittydar.js";
    
    function go() {
        var canvas = document.getElementById("catpix");
        var cats = detectCats(canvas);
        drawRectangles(canvas, cats);
    }
    

    To import multiple names from a module, you would write:

    import {detectCats, Kittydar} from "kittydar.js";
    

    When you run a module containing an import declaration, the modules it imports are loaded first, then each module body is executed in a depth-first traversal of the dependency graph, avoiding cycles by skipping anything already executed.
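
    A tiny illustration, using two hypothetical modules that import each other:

    // main.js
    import "dep.js";
    console.log("main runs");

    // dep.js
    import "main.js";  // cycle: main.js is already executing, so it is skipped
    console.log("dep runs");

    // Running main.js prints "dep runs", then "main runs".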

    And those are the basics of modules. It’s really quite simple. ;-)

    Export lists

    Rather than tagging each exported feature, you can write out a single list of all the names you want to export, wrapped in curly braces:

    export {detectCats, Kittydar};
    
    // no `export` keyword required here
    function detectCats(canvas, options) { ... }
    class Kittydar { ... }
    

    An export list doesn’t have to be the first thing in the file; it can appear anywhere in a module file’s top-level scope. You can have multiple export lists, or mix export lists with other export declarations, as long as no name is exported more than once.

    Renaming imports and exports

    Once in a while, an imported name happens to collide with some other name that you also need to use. So ES6 lets you rename things when you import them:

    // suburbia.js
    
    // Both these modules export something named `flip`.
    // To import them both, we must rename at least one.
    import {flip as flipOmelet} from "eggs.js";
    import {flip as flipHouse} from "real-estate.js";
    ...
    

    Similarly, you can rename things when you export them. This is handy if you want to export the same value under two different names, which occasionally happens:

    // unlicensed_nuclear_accelerator.js - media streaming without drm
    // (not a real library, but maybe it should be)
    
    function v1() { ... }
    function v2() { ... }
    
    export {
      v1 as streamV1,
      v2 as streamV2,
      v2 as streamLatestVersion
    };
    

    Default exports

    The new standard is designed to interoperate with existing CommonJS and AMD modules. So suppose you have a Node project and you’ve done npm install lodash. Your ES6 code can import individual functions from Lodash:

    import {each, map} from "lodash";
    
    each([3, 2, 1], x => console.log(x));
    

    But perhaps you’ve gotten used to seeing _.each rather than each and you still want to write things that way. Or maybe you want to use _ as a function, since that’s a useful thing to do in Lodash.

    For that, you can use a slightly different syntax: import the module without curly braces.

    import _ from "lodash";
    

    This shorthand is equivalent to import {default as _} from "lodash";. All CommonJS and AMD modules are presented to ES6 as having a default export, which is the same thing that you would get if you asked require() for that module—that is, the exports object.

    ES6 modules were designed to let you export multiple things, but for existing CommonJS modules, the default export is all you get. For example, as of this writing, the famous colors package doesn’t have any special ES6 support as far as I can tell. It’s a collection of CommonJS modules, like most packages on npm. But you can import it right into your ES6 code.

    // ES6 equivalent of `var colors = require("colors/safe");`
    import colors from "colors/safe";
    

    If you’d like your own ES6 module to have a default export, that’s easy to do. There’s nothing magic about a default export; it’s just like any other export, except it’s named "default". You can use the renaming syntax we already talked about:

    let myObject = {
      field1: value1,
      field2: value2
    };
    export {myObject as default};
    

    Or better yet, use this shorthand:

    export default {
      field1: value1,
      field2: value2
    };
    

    The keywords export default can be followed by any value: a function, a class, an object literal, you name it.

    Module objects

    Sorry this is so long. But JavaScript is not alone: for some reason, module systems in all languages tend to have a ton of individually small, boring convenience features. Fortunately, there’s just one thing left. Well, two things.

    import * as cows from "cows";
    

    When you import *, what’s imported is a module namespace object. Its properties are the module’s exports. So if the “cows” module exports a function named moo(), then after importing “cows” this way, you can write: cows.moo().

    Aggregating modules

    Sometimes the main module of a package is little more than importing all the package’s other modules and exporting them in a unified way. To simplify this kind of code, there’s an all-in-one import-and-export shorthand:

    // world-foods.js - good stuff from all over
    
    // import "sri-lanka" and re-export some of its exports
    export {Tea, Cinnamon} from "sri-lanka";
    
    // import "equatorial-guinea" and re-export some of its exports
    export {Coffee, Cocoa} from "equatorial-guinea";
    
    // import "singapore" and export ALL of its exports
    export * from "singapore";
    

    Each one of these export-from statements is similar to an import-from statement followed by an export. Unlike a real import, this doesn’t add the re-exported bindings to your scope. So don’t use this shorthand if you plan to write some code in world-foods.js that makes use of Tea. You’ll find that it’s not there.

    If any name exported by “singapore” happened to collide with the other exports, that would be an error, so use export * with care.

    Whew! We’re done with syntax! On to the interesting parts.

    What does import actually do?

    Would you believe… nothing?

    Oh, you’re not that gullible. Well, would you believe the standard mostly doesn’t say what import does? And that this is a good thing?

    ES6 leaves the details of module loading entirely up to the implementation. The rest of module execution is specified in detail.

    Roughly speaking, when you tell the JS engine to run a module, it has to behave as though these four steps are happening:

    1. Parsing: The implementation reads the source code of the module and checks for syntax errors.

    2. Loading: The implementation loads all imported modules (recursively). This is the part that isn’t standardized yet.

    3. Linking: For each newly loaded module, the implementation creates a module scope and fills it with all the bindings declared in that module, including things imported from other modules.

      This is the part where if you try to import {cake} from "paleo", but the “paleo” module doesn’t actually export anything named cake, you’ll get an error. And that’s too bad, because you were so close to actually running some JS code. And having cake!

    4. Run time: Finally, the implementation runs the statements in the body of each newly-loaded module. By this time, import processing is already finished, so when execution reaches a line of code where there’s an import declaration… nothing happens!

    See? I told you the answer was “nothing”. I don’t lie about programming languages.

    But now we get to the fun part of this system. There’s a cool trick. Because the system doesn’t specify how loading works, and because you can figure out all the dependencies ahead of time by looking at the import declarations in the source code, an implementation of ES6 is free to do all the work at compile time and bundle all your modules into a single file to ship them over the network! And tools like webpack actually do this.

    This is a big deal, because loading scripts over the network takes time, and every time you fetch one, you may find that it contains import declarations that require you to load dozens more. A naive loader would require a lot of network round trips. But with webpack, not only can you use ES6 with modules today, you get all the software engineering benefits with no run-time performance hit.

    A detailed specification of module loading in ES6 was originally planned—and built. One reason it isn’t in the final standard is that there wasn’t consensus on how to achieve this bundling feature. I hope someone figures it out, because as we’ll see, module loading really should be standardized. And bundling is too good to give up.

    Static vs. dynamic, or: rules and how to break them

    For a dynamic language, JavaScript has gotten itself a surprisingly static module system.

    • All flavors of import and export are allowed only at toplevel in a module. There are no conditional imports or exports, and you can’t use import in function scope.

    • All exported identifiers must be explicitly exported by name in the source code. You can’t programmatically loop through an array and export a bunch of names in a data-driven way.

    • Module objects are frozen. There is no way to hack a new feature into a module object, polyfill style.

    • All of a module’s dependencies must be loaded, parsed, and linked eagerly, before any module code runs. There’s no syntax for an import that can be loaded lazily, on demand.

    • There is no error recovery for import errors. An app may have hundreds of modules in it, and if anything fails to load or link, nothing runs. You can’t import in a try/catch block. (The upside here is that because the system is so static, webpack can detect those errors for you at compile time.)

    • There is no hook allowing a module to run some code before its dependencies load. This means that modules have no control over how their dependencies are loaded.

    The system is quite nice as long as your needs are static. But you can imagine needing a little hack sometimes, right?

    That’s why whatever module-loading system you use will have a programmatic API to go alongside ES6’s static import/export syntax. For example, webpack includes an API that you can use for “code splitting”, loading some bundles of modules lazily on demand. The same API can help you break most of the other rules listed above.

    The ES6 module syntax is very static, and that’s good—it’s paying off in the form of powerful compile-time tools. But the static syntax was designed to work alongside a rich dynamic, programmatic loader API.

    When can I use ES6 modules?

    To use modules today, you’ll need a compiler such as Traceur or Babel. Earlier in this series, Gastón I. Silva showed how to use Babel and Broccoli to compile ES6 code for the web; building on that article, Gastón has a working example with support for ES6 modules. This post by Axel Rauschmayer contains an example using Babel and webpack.

    The ES6 module system was designed mainly by Dave Herman and Sam Tobin-Hochstadt, who defended the static parts of the system against all comers (including me) through years of controversy. Jon Coppeard is implementing modules in Firefox. Additional work on a JavaScript Loader Standard is underway. Work to add something like <script type=module> to HTML is expected to follow.

    And that’s ES6.

    This has been so much fun that I don’t want it to end. Maybe we should do just one more episode. We could talk about odds and ends in the ES6 spec that weren’t big enough to merit their own article. And maybe a little bit about what the future holds. Please join me next week for the stunning conclusion of ES6 In Depth.

  10. Performance with JavaScript String Objects

    This article takes a look at how JavaScript engines perform on primitive value Strings versus Object Strings. It is a showcase of benchmarks related to the excellent article by Kiro Risk, The Wrapper Object. Before proceeding, I would suggest visiting Kiro’s page first as an introduction to this topic.

    The ECMAScript 5.1 Language Specification (PDF link) states in paragraph 4.3.18, about the String object:

    String object: member of the Object type that is an instance of the standard built-in String constructor

    NOTE A String object is created by using the String constructor in a new expression, supplying a String value as an argument.
    The resulting object has an internal property whose value is the String value. A String object can be coerced to a String value
    by calling the String constructor as a function (15.5.1).

    and David Flanagan’s great book “JavaScript: The Definitive Guide” very meticulously describes wrapper objects in section 3.6:

    Strings are not objects, though, so why do they have properties? Whenever you try to refer to a property of a string s, JavaScript converts the string value to an object as if by calling new String(s). […] Once the property has been resolved, the newly created object is discarded. (Implementations are not required to actually create and discard this transient object: they must behave as if they do, however.)

    It is important to note the parenthesized text above: the way the transient String object is created and discarded is implementation specific. As such, an obvious question one could ask is “since a primitive value String must be coerced to a String Object whenever one of its properties is accessed, for example str.length, would it be faster if we had declared the variable as a String Object in the first place?”. In other words, could declaring a variable as a String Object, i.e. var str = new String("hello"), rather than as a primitive value String, i.e. var str = "hello", save the JS engine from having to create a new String Object on the fly in order to access its properties?

    Those who work on implementing the ECMAScript standard in JS engines already know the answer, but it’s worth having a deeper look at the common suggestion “Do not create numbers or strings using the ‘new’ operator”.

    Our showcase and objective

    For our showcase we will mainly use Firefox and Chrome; the results, though, would be similar for any other web browser, since we are focusing not on a speed comparison between two different browser engines, but on a speed comparison between two different versions of the source code in each browser (one version using a primitive value string, the other a String Object). In addition, we are interested in how the same cases compare in speed across subsequent versions of the same browser. The first sample of benchmarks was collected on the same machine, and then other machines with different OS/hardware specs were added in order to validate the speed numbers.

    The scenario

    For the benchmarks, the case is rather simple; we declare two string variables, one as a primitive value string and the other as an Object String, both of which have the same value:

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

    and then we perform the same kinds of tasks on them. (Notice that on the jsPerf pages, strprimitive = str1 and strobject = str2.)

    1. length property

      var i = strprimitive.length;
      var k = strobject.length;

    If we assume that at runtime the wrapper object created from the primitive value string strprimitive is treated by the JavaScript engine the same as the object string strobject in terms of performance, then we should expect the same latency when accessing each variable’s length property. Yet, as we can see in the following bar chart, accessing the length property is a lot faster on the primitive value string strprimitive than on the object string strobject.


    (Primitive value string vs Wrapper Object String – length, on jsPerf)

    Actually, on Chrome 24.0.1285, calling strprimitive.length is 2.5x faster than calling strobject.length, and on Firefox 17 it is about 2x faster (though with more operations per second overall). Consequently, we realize that the corresponding browser JavaScript engines apply some “short paths” to access the length property when dealing with primitive string values, with special code blocks for each case.

    In the SpiderMonkey JS engine, for example, the pseudo-code that deals with the “get property” operation looks something like the following:

      // direct check for the "length" property
      if (typeof(value) == "string" && property == "length") {
        return StringLength(value);
      }
      // generalized code form for properties
      object = ToObject(value);
      return InternalGetProperty(object, property);

    Thus, when you request a property on a string primitive and the property name is “length”, the engine immediately returns its length, avoiding both the full property lookup and the temporary wrapper object creation. Unless we add to String.prototype a property or method that uses |this|, like so:

      String.prototype.getThis = function () { return this; }
      console.log("hello".getThis());

    then no wrapper object will be created when accessing the String.prototype methods, such as String.prototype.valueOf(). Each JS engine embeds similar optimizations in order to produce faster results.
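
    With the getThis method above in place, you can observe the coercion directly from the console; the strict-mode variant below is our own addition for contrast:

      console.log(typeof "hello".getThis());  // "object": a wrapper was created

      // In strict mode, |this| is not coerced, so no wrapper is created:
      String.prototype.getThisStrict = function () { "use strict"; return this; };
      console.log(typeof "hello".getThisStrict());  // "string"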

    2. charAt() method

      var i = strprimitive.charAt(0);
      var k = strobject["0"];


    (Primitive value string vs Wrapper Object String – charAt(), on jsPerf)

    This benchmark clearly verifies the previous statement, as we can see that getting the value of the first string character is substantially faster in Firefox 20 with strprimitive than with strobject: about 70x faster. Similar results apply to other browsers as well, though at different speeds. Also, notice the differences between incremental Firefox versions; this is just another indicator of how small code variations can affect the JS engine’s speed for certain runtime calls.

    3. indexOf() method

      var i = strprimitive.indexOf("e");
      var k = strobject.indexOf("e");


    (Primitive value string vs Wrapper Object String – IndexOf(), on jsPerf)

    Similarly in this case, we can see that the primitive value string strprimitive achieves more operations per second than strobject. In addition, the JS engine differences between successive browser versions produce a variety of measurements.

    4. match() method

    Since the results here are similar, to save some space you can click the source link to view the benchmark.

    (Primitive value string vs Wrapper Object String – match(), on jsPerf)

    5. replace() method

    (Primitive value string vs Wrapper Object String – replace(), on jsPerf)

    6. toUpperCase() method

    (Primitive value string vs Wrapper Object String – toUpperCase(), on jsPerf)

    7. valueOf() method

      var i = strprimitive.valueOf();
      var k = strobject.valueOf();

    At this point it starts to get more interesting. So, what happens when we try to call the most common method of a string, its valueOf()? It seems that most browsers have a mechanism to determine whether the receiver is a primitive value string or an Object String, and thus use a much faster way to get its value; surprisingly enough, Firefox versions up to v20 seem to favour the Object String method call on strobject, with about 7x the speed.


    (Primitive value string vs Wrapper Object String – valueOf(), on jsPerf)

    It’s also worth mentioning that Chrome 22.0.1229 seems to have favoured the Object String as well, while in version 23.0.1271 a new way of getting the content of primitive value strings was implemented.

    A simpler way to run this benchmark in your browser’s console is described in a comment on the jsPerf page.
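
    For the impatient, here is a rough sketch of such a console benchmark (our own version, not the exact code from that comment):

      // Time a tight loop over each case; the iteration count is arbitrary.
      function bench(label, fn) {
        var t0 = Date.now();
        for (var n = 0; n < 1000000; n++) { fn(); }
        console.log(label + ": " + (Date.now() - t0) + " ms");
      }

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

      bench("primitive valueOf", function () { return strprimitive.valueOf(); });
      bench("object valueOf",    function () { return strobject.valueOf(); });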

    8. Adding two strings

      var i = strprimitive + " there";
      var k = strobject + " there";


    (Primitive string vs Wrapper Object String – get str value, on jsPerf)

    Let’s now try concatenating our two strings with a primitive string value. As the chart shows, Firefox and Chrome show a 2.8x and 2x speed increase respectively in favour of strprimitive, compared with concatenating the Object String strobject with another string value.

    9. Adding two strings with valueOf()

      var i = strprimitive.valueOf() + " there";
      var k = strobject.valueOf() + " there";


    (Primitive string vs Wrapper Object String – str valueOf, on jsPerf)

    Here we can see again that Firefox favours strobject.valueOf(), since for strprimitive.valueOf() it has to move up the inheritance tree and consequently create a new wrapper object for strprimitive. The effect this chain of events has on performance can also be seen in the next case.

    10. for-in wrapper object

      var i = "";
      for (var temp in strprimitive) { i += strprimitive[temp]; }
     
      var k = "";
      for (var temp in strobject) { k += strobject[temp]; }

    This benchmark incrementally builds up a copy of each string’s value, through a loop, in another variable. In a for-in loop the expression to be evaluated is normally an object, but if the expression is a primitive value, this value gets coerced to its equivalent wrapper object. Of course, this is not a recommended way to get the value of a string, but it is one of the many ways a wrapper object can be created, and thus it is worth mentioning.


    (Primitive string vs Wrapper Object String – Properties, on jsPerf)

    As expected, Chrome seems to favour the primitive value string strprimitive, while Firefox and Safari seem to favour the object string strobject. In case this all seems fairly typical by now, let’s move on to the last benchmark.

    11. Adding two strings with an Object String

      var str3 = new String(" there");
     
      var i = strprimitive + str3;
      var k = strobject + str3;


    (Primitive string vs Wrapper Object String – 2 str values, on jsPerf)

    In the previous examples we saw that Firefox versions offer better performance when our initial string is an Object String like strobject, so it would seem natural to expect the same when adding strobject to another Object String, which is basically the same thing. It is worth noticing, though, that when adding a string to an Object String, it is actually considerably faster in Firefox if we use strprimitive instead of strobject. This proves once more how source code variations, such as a patch for a bug, can lead to different benchmark numbers.

    Conclusion

    Based on the benchmarks described above, we have seen how subtle differences in our string declarations can produce a range of different performance results. It is recommended that you continue to declare your string variables as you normally do, unless there is a very specific reason for you to create instances of the String Object. Also, note that a browser’s overall performance, particularly when dealing with the DOM, does not depend only on the page’s JS performance; there is a lot more to a browser than its JS engine.
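
    As a final, self-made illustration of why the plain declaration is the safe default, note that wrapper objects also behave differently in ways that have nothing to do with performance:

      var greeting = "Hello";             // typeof greeting === "string"
      var wrapped  = new String("Hello"); // typeof wrapped  === "object"

      // Wrapper objects compare by identity, not by value:
      console.log(new String("a") == "a");              // true (coerced)
      console.log(new String("a") === "a");             // false
      console.log(new String("a") == new String("a"));  // false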

    Feedback comments are much appreciated. Thanks :-)