JavaScript Articles

  1. Capturing – Improving Performance of the Adaptive Web

    Responsive design is now widely regarded as the dominant approach to building new websites. With good reason, too: a responsive design workflow is the most efficient way to build tailored visual experiences for different device screen sizes and resolutions.

    Responsive design, however, is only the tip of the iceberg when it comes to creating a rich, engaging mobile experience.


    Image Source: For a Future-Friendly Web by Brad Frost

    The issue of performance with responsive websites

    Performance is one of the most important features of a website, yet it is frequently overlooked. Performance is something that many developers struggle with: creating high-performing websites means spending a lot of time tuning your site’s backend, and even more time understanding how browsers work, so that pages render as fast as possible.

    When it comes to creating responsive websites, the performance challenges are even more difficult because you have a single set of markup that is meant to be consumed by all kinds of devices. One problem you hit is the responsive image problem – how do you ensure that big images intended for your Retina Macbook Pro are not downloaded on an old Android phone? How do you prevent desktop ads from rendering on small screen devices?

    It’s easy to overlook performance as a problem because we often conduct testing under perfect conditions – using a fast computer, fast internet, and close proximity to our servers. Just to give you an idea of how widespread this problem is, we conducted an analysis of some top responsive e-commerce sites which revealed that the average responsive site home page consists of 87.2 resources and 1.9 MB of data.

    It is possible to solve the responsive performance problem by making the necessary adjustments to your website manually, but performance tuning by hand involves both complexity and repetition, and that makes it a great candidate for creating tools. With Capturing, we intend to make creating high-performing adaptive web experiences as easy as possible.

    Introducing Capturing

    Capturing is a client-side API we’ve developed to give developers complete control over the DOM before any resources have started loading. With responsive sites, it is a challenge to control what resources you want to load based on the conditions of the device: all current solutions require you to make significant changes to your existing site by either using server-side user-agent detection, or by forcing you to break semantic web standards (for example, changing the src attribute to data-src).

    We give you resource control by capturing the source markup before it has a chance to be parsed by the browser, and then reconstructing the document with resources disabled.

    The ability to control resources client-side gives you an unprecedented amount of control over the performance of your website.

    Capturing was a key feature of Mobify.js 1.1, our framework for creating mobile and tablet websites using client-side templating. We have since reworked Mobify.js in our 2.0 release to be a much more modular library that can be used in any existing website, with Capturing as the primary focus.
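
    To make this concrete, here is a minimal sketch of what a capture flow looks like. The names below (Mobify.Capture.init, capturedDoc, renderCapturedDoc) follow the Mobify.js 2.0 examples but should be treated as assumptions – the quick start guide referenced at the end of this article has the authoritative API:

    Mobify.Capture.init(function(capture) {
        // the captured document: a full DOM, but with no resources loaded yet
        var capturedDoc = capture.capturedDoc;

        // modify the captured DOM freely here, e.g. swap image sources

        // re-render the document so the browser can start loading
        // the (possibly modified) resources
        capture.renderCapturedDoc();
    });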

    A solution to the responsive image problem

    One way people have been tackling the responsive image problem is by modifying existing backend markup, changing the src of all their img elements to something like data-src, and accompanying that change with a <noscript> fallback. The reason this is done is discussed in this CSS-Tricks post:

    “a src that points to an image of a horse will start downloading as soon as that image gets parsed by the browser. There is no practical way to prevent this.”

    With Capturing, this is no longer true.

    Say, for example, you had an img element that you want to modify for devices with Retina screens, but you didn’t want the original image in the src attribute to load. Using Capturing, you could do something like this:

    if (window.devicePixelRatio && window.devicePixelRatio >= 2) {
        var bannerImg = capturedDoc.getElementById("banner");
        bannerImg.src = "retinaBanner.png";
    }

    Because we have access to the DOM before any resources are loaded, we can swap the src of images on the fly before they are downloaded. That example is very basic – a better way to highlight the power of Capturing is to demonstrate a perfect implementation of the picture polyfill.

    Picture Polyfill

    The Picture element is the official W3C HTML extension for dealing with adaptive images. There are polyfills that exist in order to use the Picture element in your site today, but none of them are able to do a perfect polyfill – the best polyfill implemented thus far requires a <noscript> tag surrounding an img element in order to support browsers without JavaScript. Using Capturing, you can avoid this madness completely.

    Open the example and be sure to fire up the network tab in web inspector to see which resources get downloaded:

    Here is the important chunk of code that is in the source of the example:

    <picture>
        <source src="/examples/assets/images/small.jpg">
        <source src="/examples/assets/images/medium.jpg" media="(min-width: 450px)">
        <source src="/examples/assets/images/large.jpg" media="(min-width: 800px)">
        <source src="/examples/assets/images/extralarge.jpg" media="(min-width: 1000px)">
        <img src="/examples/assets/images/small.jpg">
    </picture>

    Take note that there is an img element that uses a src attribute, but the browser only downloads the correct image. You can see the code for this example here (note that the polyfill is only available in the example, not the library itself – yet):

    Not all sites use modified src attributes and <noscript> tags to solve the responsive image problem. An alternative, if you don’t want to rely on modifying src or adding <noscript> tags for every image of your site, is to use server-side detection in order to swap out images, scripts, and other content. Unfortunately, this solution comes with a lot of challenges.

    It was easy to use server-side user-agent detection when the only device you needed to worry about was the iPhone, but with the number of new devices rolling out, keeping a dictionary of all devices containing information about their screen width, device pixel ratio, and more is a very painful task; not to mention there are certain things you cannot detect with server-side user-agent detection – such as actual network bandwidth.

    What else can you do with Capturing?

    Solving the responsive image problem is a great use case for Capturing, but there are many more. Here are a few more interesting examples:

    Media queries in markup to control resource loading

    In this example, we use media queries in attributes on images and scripts to determine which ones will load, just to give you an idea of what you can do with Capturing. This example can be found here:
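
    The library handles the heavy lifting, but a hand-rolled sketch of the idea inside a capture callback might look like the following. The media attribute on img is a made-up convention for illustration, and we assume the captured document supports standard DOM queries:

    // remove any image whose media attribute doesn't match the device,
    // before the browser ever gets a chance to download it
    var imgs = capturedDoc.querySelectorAll("img[media]");
    for (var i = 0; i < imgs.length; i++) {
        if (!window.matchMedia(imgs[i].getAttribute("media")).matches) {
            imgs[i].parentNode.removeChild(imgs[i]);
        }
    }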

    Complete re-writing of a page using templating

    The primary function of Mobify.js 1.1 was client-side templating to completely rewrite the pages of your existing site when responsive design doesn’t offer enough flexibility, or when changing the backend is simply too painful and tedious. It is particularly helpful when you need a mobile presence, fast. This is no longer the primary function of Mobify.js, but it is still possible using Capturing.

    Check out this basic example:

    In this example, we’ve taken parts of the existing page and used them in completely new markup rendered to the browser.

    Fill your page with grumpy cats

    And of course, there is nothing more useful than replacing all the images in a page with grumpy cats! In a high-performing way, of course ;-).

    Once again, open up web inspector to see that the original images on the site did not download.
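
    Conceptually the whole trick fits in a few lines – rewrite every src in the captured document before anything downloads. A sketch, with grumpycat.jpg as a placeholder image and the same assumed API as above:

    Mobify.Capture.init(function(capture) {
        var capturedDoc = capture.capturedDoc;
        var imgs = capturedDoc.getElementsByTagName("img");
        for (var i = 0; i < imgs.length; i++) {
            // the original src never loads - only the grumpy cat does
            imgs[i].src = "grumpycat.jpg";
        }
        capture.renderCapturedDoc();
    });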

    Performance

    So what’s the catch? Is there a performance penalty to using Capturing? Yes, there is, but we feel the performance gains you can make by controlling your resources outweigh the minor penalty that Capturing brings. On first load, the library (and the main executable, if not concatenated together) must download and execute, and the load time here will vary depending on the round-trip latency of the device (roughly 60ms to 300ms). However, the penalty of every subsequent request will be reduced by at least half due to the library being cached and the just-in-time (JIT) compiler making the compilation much more efficient. You can run the test yourself!

    We also do our best to keep the size of the library to a minimum – at the time of publishing this blog post, the library is 4KB minified and gzipped.

    Why should you use Capturing?

    We created Capturing to give front-end developers more control over performance. The reason other solutions fail to solve this problem is that the responsibilities of the front-end and the backend have become increasingly intertwined. The backend’s responsibility should be to generate semantic web markup, and the front-end’s responsibility should be to process that markup so that it is presented on the device in the best and highest-performing way. Responsive design solves the first issue (visually representing data), and Capturing helps solve the second (increasing performance by using front-end techniques such as determining screen size and bandwidth to control resource loading).

    If you want to continue to obey the laws of the semantic web, and if you want an easy way to control performance at the front-end, we highly recommend that you check out Mobify.js 2.0!

    How can I get started using Capturing?

    Head over to our quick start guide for instructions on how to get set up with Capturing.

    What’s next?

    We’ve begun with an official developer preview of Mobify.js 2.0, which includes just the Capturing portion, but we will be adding more and more useful features.

    The next feature on the list is automatic resizing of images, allowing you to dynamically download images based on the size of the browser window without modifying your existing markup (aside from inserting a small JavaScript snippet)!

    We also plan to create other polyfills for features that can only be properly polyfilled with Capturing, such as the new HTML5 template tag.

    We look forward to your feedback, and we are excited to see what other developers will do with our new Mobify.js 2.0 library!

  2. Firefox 4 Performance

    Dave Mandelin from the JS team and Joe Drew from the Graphics team summarize the key performance improvements in Firefox 4.

    The web wants fast browsers. Cutting-edge HTML5 web pages play games, mash up and share maps, sound, and videos, show spreadsheets and presentations, and edit photos. Only a high-performance browser can do that. What the web wants, it’s our job to make, and we’ve been working hard to make Firefox 4 fast.

    Firefox 4 comes with performance improvements in almost every area. The most dramatic improvements are in JavaScript and graphics, which are critical for modern HTML5 apps and games. In the rest of this article, we’ll profile the key performance technologies and show how they make the web that much “more awesomer”.

    Fast JavaScript: Uncaging the JägerMonkey
    JavaScript is the programming language of the web, powering most of the dynamic content and behavior, so fast JavaScript is critical for rich apps and games. Firefox 4 gets fast JavaScript from a beast we call JägerMonkey. In techno-gobbledygook, JägerMonkey is a multi-architecture per-method JavaScript JIT compiler with 64-bit NaN-boxing, inline caching, and register allocation. Let’s break that down:

      Multi-architecture
      JägerMonkey has full support for x86, x64, and ARM processors, so we’re fast on both traditional computers and mobile devices. W00t!
      (Crunchy technical stuff ahead: if you don’t care how it works, skip the rest of the sections.)

      Per-method JavaScript JIT compilation

      The basic idea of JägerMonkey is to translate (compile) JavaScript to machine code, “just in time” (JIT). JIT-compiling JavaScript isn’t new: previous versions of Firefox feature the TraceMonkey JIT, which can generate very fast machine code. But some programs can’t be “jitted” by TraceMonkey. JägerMonkey has a simpler design that is able to compile everything in exchange for not doing quite as much optimization. But it’s still fast. And TraceMonkey is still there, to provide a turbo boost when it can.

      64-bit NaN-boxing
      That’s the technical name for the new 64-bit formats the JavaScript engine uses to represent program values. These formats are designed to help the JIT compilers and tuned for modern hardware. For example, think about floating-point numbers, which are 64 bits. With the old 32-bit value formats, floating-point calculations required the engine to allocate, read, write, and deallocate extra memory, all of which is slow, especially now that processors are much faster than memory. With the new 64-bit formats, no extra memory is required, and calculations are much faster. If you want to know more, see the technical article Mozilla’s new JavaScript value representation.
      Inline caching
      Property accesses, like o.p, are common in JavaScript. Without special help from the engine, they are complicated, and thus slow: first the engine has to search the object and its prototypes for the property, then find out where the value is stored, and only then read the value. The idea behind inline caching is: “What if we could skip all that other junk and just read the value?” Here’s how it works: the engine assigns every object a shape that describes its prototype and properties. At first, the JIT generates machine code for o.p that gets the property by laborious searching. But once that code runs, the JIT finds out what o's shape is and how to get the property. The JIT then generates specialized machine code that simply verifies that the shape is the same and gets the property. For the rest of the program, that o.p runs about as fast as possible. (A rough JavaScript illustration follows this list.) See the technical article PICing on JavaScript for fun and profit for much more about inline caching.

      Register allocation
      Code generated by basic JITs spends a lot of time reading and writing memory: for code like x+y, the machine code first reads x, then reads y, adds them, and then writes the result to temporary storage. With 64-bit values, that's up to 6 memory accesses. A more advanced JIT, such as JägerMonkey, generates code that tries to hold most values in registers. JägerMonkey also does some related optimizations, like trying to avoid storing values at all when they are constant or just a copy of some other value.
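
    Back to inline caching for a moment: you cannot observe the caches directly from script, but you can see what they rely on. A property access site that always sees objects of one shape can be specialized; one that sees many shapes cannot. A rough, engine-agnostic illustration:

      // Every object here has the same shape {x, y}, so the access site
      // points[i].x can be specialized to one shape check plus a direct
      // load - this is the inline-caching fast path.
      function sumX(points) {
        var total = 0;
        for (var i = 0; i < points.length; i++) {
          total += points[i].x;  // monomorphic: one shape seen here
        }
        return total;
      }
      sumX([{x: 1, y: 2}, {x: 3, y: 4}]);

      // Mixing different property layouts at the same site defeats the
      // specialized code, forcing a slower generic lookup:
      sumX([{x: 1, y: 2}, {y: 4, x: 3, z: 5}]);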

    Here's what JägerMonkey does to our benchmark scores:

    That's more than 3x improvement on SunSpider and Kraken and more than 6x on V8!

    Fast Graphics: GPU-powered browsing.
    For Firefox 4, we sped up how Firefox draws and composites web pages using the Graphics Processing Unit (GPU) in most modern computers.

    On Windows Vista and Windows 7, all web pages are hardware accelerated using Direct2D. This provides a great speedup for many complex web sites and demo pages.

    On Windows and Mac, Firefox uses 3D frameworks (Direct3D or OpenGL) to accelerate the composition of web page elements. This same technique is also used to accelerate the display of HTML5 video.

    Final take
    Fast, hardware-accelerated graphics combined with fast JavaScript means cutting-edge HTML5 games, demos, and apps run great in Firefox 4. You can see it in action on some of the sites we enjoyed making fast. There's plenty more to try in the Mozilla Labs Gaming entries and of course, be sure to check out the Web O' Wonder.

  3. a quick note on JavaScript engine components

    There have been a bunch of posts about the JägerMonkey (JM) post that we made the other day, some of which get things subtly wrong about the pieces of technology that are being used as part of Mozilla’s JM work. So here’s the super-quick overview of what we’re using, what the various parts do and where they came from:

    1. SpiderMonkey. This is Mozilla’s core JavaScript interpreter. This engine takes raw JavaScript and turns it into an intermediate bytecode. That bytecode is then interpreted. SpiderMonkey was responsible for all JavaScript handling in Firefox 3 and earlier. We continue to make improvements to this engine, as it’s still the basis for a lot of work that we did in Firefox 3.5, 3.6 and later releases as well.

    2. Tracing. Tracing was added before Firefox 3.5 and was responsible for much of the big jump that we made in performance. (Although some of that was because we also improved the underlying SpiderMonkey engine as well.)

    This is what we do to trace:

    1. Monitor interpreted JavaScript code during execution looking for code paths that are used more than once.
    2. When we find a piece of code that’s used more than once, optimize that code.
    3. Take that optimized representation and assemble it to machine code and execute it.

    What we’ve found since Firefox 3.5 is that when we’re in full tracing mode, we’re really really fast. We’re slow when we have to “fall back” to SpiderMonkey and interpret + record.
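
    Concretely, the kind of code that traces well is a tight, type-stable loop where the same path executes over and over. Here is a rough illustration of the pattern (the engine, not the programmer, decides what gets traced):

      // A hot loop like this is an ideal tracing candidate: the same
      // code path runs repeatedly with the same (numeric) types, so
      // the recorded trace can be compiled to fast machine code.
      function sumSquares(n) {
        var total = 0;
        for (var i = 0; i < n; i++) {
          total += i * i;
        }
        return total;
      }
      sumSquares(1000000);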

    One difficult part of tracing is generating code that runs fast. This is done by a piece of code called Nanojit. Nanojit is a piece of code that was originally part of the Tamarin project. Mozilla isn’t using most of Tamarin for two reasons: 1. we’re not shipping ECMAScript 4 and 2. the interpreted part of Tamarin was much slower than SpiderMonkey. For Firefox 3.5 we took the best part – Nanojit – and bolted it to the back of SpiderMonkey instead.

    Nanojit does two things: it takes a high-level representation of JavaScript and does optimization. It also includes an assembler to take that optimized representation and generate native code for machine-level execution.

    Mozilla and Adobe continue to collaborate on Nanojit. Adobe uses Nanojit as part of their ActionScript VM.

    3. Nitro Assembler. This is a piece of code that we’re taking from Apple’s version of WebKit that generates native code for execution. The Nitro Assembler is very different from Nanojit. While Nanojit takes a high-level representation, does optimization, and then generates code, all the Nitro Assembler does is generate code. So it’s complex, low-level code, but it doesn’t do the same set of things that Nanojit does.

    We’re using the Nitro assembler (along with a lot of other new code) to basically build what everyone else has – compiled JavaScript – and then we’re going to do what we did with Firefox 3.5 – bolt tracing onto the back of that. So we’ll hopefully have the best of all worlds: SpiderMonkey generating native code to execute like the other VMs with the ability to go onto trace for tight inner loops for even more performance.

    I hope this helps to explain what bits of technology we’re using and how they fit into the overall picture of Firefox’s JS performance.

  4. audio player – HTML5 style

    Last week we featured a demo from Alistair MacDonald (@F1LT3R) where he showed how to animate SVG with Canvas and a bunch of free tools. This week he has another demo for us that shows how you can use the new audio element in Firefox 3.5 with some canvas and JS to build a nice-looking audio player.

    But what’s really interesting about this demo is not so much that it plays audio – lots of people have built audio players – but how it works. If you look at the source code for the page what you’ll find looks something like this:

    <div id="jai">
      <canvas id="jai-transport" width="320" height="20"></canvas>
      <ul class="playlist">
        <li>
          <a href="@F1LT3R - Cryogenic Unrest.ogg">
            F1LT3R - Cryogenic Unrest
          </a>
          <audio src="@F1LT3R - Cryogenic Unrest.ogg"/>.
        <li>
          <a href="@F1LT3R - Ghosts in HyperSpace.ogg">
            F1LT3R - Ghosts in HyperSpace
          </a>
          <audio src="@F1LT3R - Ghosts in HyperSpace.ogg"/>.       
      </ul>    
    </div>
    (The actual list has fallbacks and is more compact – cleaned up here for easier reading.)

    That’s right – the player above is just a simple HTML unordered list that happens to include audio elements and is styled with CSS. You’ll notice that if you right-click on one of the links it has all the usual items – save as, bookmark this link, copy this link location, etc. You can even poke at it with Firebug.

    The JavaScript driver that Al has written will look for a <div> element with the jai ID and then look for any audio elements that are inside it. It then will draw the playback interface in the canvas at the top of the list. The playback interface is built with simple JS canvas calls and an SVG-derived font.
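
    In outline, the driver pattern looks something like this (a sketch of the approach just described, not Al's actual code):

    // find the container, its transport canvas, and its audio elements
    var jai = document.getElementById('jai');
    var transport = document.getElementById('jai-transport');
    var ctx = transport.getContext('2d');
    var tracks = jai.getElementsByTagName('audio');
    var current = tracks[0];

    // the real driver draws buttons and a progress bar with ctx;
    // here we draw a placeholder bar and toggle playback on click
    ctx.fillRect(0, 8, 320, 4);
    transport.addEventListener('click', function() {
        if (current.paused) {
            current.play();
        } else {
            current.pause();
        }
    }, false);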

    Using this driver it’s super-easy to add an audio player to any web site by just defining a canvas and a list. Much like what we’ve seen on a lot of the web with the rise of useful libraries like jQuery, this library can add additional value to easily-defined markup. Another win for HTML5 and the library model.

    Al has a much larger write-up on the same page as the demo. If you haven’t read through it you should now.

    (Also? Al wrote the music himself. So awesome.)

  5. Performance with JavaScript String Objects

    This article takes a look at the performance of JavaScript engines when dealing with primitive value Strings and Object Strings. It is a showcase of benchmarks related to the excellent article by Kiro Risk, The Wrapper Object. Before proceeding, I would suggest visiting Kiro’s page first as an introduction to this topic.

    The ECMAScript 5.1 Language Specification (PDF link) states at paragraph 4.3.18 about the String object:

    String object: member of the Object type that is an instance of the standard built-in String constructor

    NOTE A String object is created by using the String constructor in a new expression, supplying a String value as an argument.
    The resulting object has an internal property whose value is the String value. A String object can be coerced to a String value
    by calling the String constructor as a function (15.5.1).

    and David Flanagan’s great book “JavaScript: The Definitive Guide”, very meticulously describes the Wrapper Objects at section 3.6:

    Strings are not objects, though, so why do they have properties? Whenever you try to refer to a property of a string s, JavaScript converts the string value to an object as if by calling new String(s). [...] Once the property has been resolved, the newly created object is discarded. (Implementations are not required to actually create and discard this transient object: they must behave as if they do, however.)

    It is important to note the last, parenthetical sentence of the quote above. Basically, the different ways a new String object is created are implementation specific. As such, an obvious question one could ask is: “since a primitive value String must be coerced to a String Object when trying to access a property, for example str.length, would it be faster if we had instead declared the variable as a String Object?” In other words, could declaring a variable as a String Object, i.e. var str = new String("hello"), rather than as a primitive value String, i.e. var str = "hello", potentially save the JS engine from having to create a new String Object on the fly in order to access its properties?

    Those who deal with the implementation of ECMAScript standards to JS engines already know the answer, but it’s worth having a deeper look at the common suggestion “Do not create numbers or strings using the ‘new’ operator”.

    Our showcase and objective

    For our showcase, we will use mainly Firefox and Chrome; the results, though, would be similar if we chose any other web browser, as we are focusing not on a speed comparison between two different browser engines, but on a speed comparison between two different versions of the source code on each browser (one version having a primitive value string, and the other a String Object). In addition, we are interested in how the same cases compare in speed across subsequent versions of the same browser. The first sample of benchmarks was collected on the same machine, and then other machines with different OS/hardware specs were added in order to validate the speed numbers.

    The scenario

    For the benchmarks, the case is rather simple; we declare two string variables, one as a primitive value string and the other as an Object String, both of which have the same value:

      var strprimitive = "Hello";
      var strobject    = new String("Hello");

    and then we perform the same kind of tasks on them. (Notice that in the jsPerf pages strprimitive = str1, and strobject = str2.)

    1. length property

      var i = strprimitive.length;
      var k = strobject.length;

    If we assume that during runtime the wrapper object created from the primitive value string strprimitive is treated the same as the object string strobject by the JavaScript engine in terms of performance, then we should expect to see the same latency when accessing each variable’s length property. Yet, as we can see in the following bar chart, accessing the length property is a lot faster on the primitive value string strprimitive than on the object string strobject.


    (Primitive value string vs Wrapper Object String – length, on jsPerf)

    Actually, on Chrome 24.0.1285, calling strprimitive.length is 2.5x faster than calling strobject.length, and on Firefox 17 it is about 2x faster (while achieving more operations per second overall). Consequently, we realize that the corresponding browser JavaScript engines apply some “short paths” to access the length property when dealing with primitive string values, with special code blocks for each case.

    In the SpiderMonkey JS engine, for example, the pseudo-code that deals with the “get property” operation looks something like the following:

      // direct check for the "length" property
      if (typeof(value) == "string" && property == "length") {
        return StringLength(value);
      }
      // generalized code form for properties
      object = ToObject(value);
      return InternalGetProperty(object, property);

    Thus, when you request a property on a string primitive and the property name is “length”, the engine immediately returns its length, avoiding the full property lookup as well as the temporary wrapper object creation. Likewise, no wrapper object needs to be created when calling built-in String.prototype methods such as String.prototype.valueOf(). A wrapper object only has to materialize if we add a property or method to String.prototype that uses |this|, like so:

      String.prototype.getThis = function () { return this; };
      console.log("hello".getThis());

    in which case “hello”.getThis() does receive a wrapper object as |this|. Each JS engine embeds similar optimizations in order to produce faster results.

    2. charAt() method

      var i = strprimitive.charAt(0);
      var k = strobject["0"];


    (Primitive value string vs Wrapper Object String – charAt(), on jsPerf)

    This benchmark clearly verifies the previous statement, as we can see that getting the value of the first string character in Firefox 20 is substantially faster on strprimitive than on strobject – roughly a 70x performance difference. Similar results apply to other browsers as well, though at different speeds. Also, notice the differences between incremental Firefox versions; this is just another indicator of how small code variations can affect the JS engine’s speed for certain runtime calls.

    3. indexOf() method

      var i = strprimitive.indexOf("e");
      var k = strobject.indexOf("e");


    (Primitive value string vs Wrapper Object String – indexOf(), on jsPerf)

    Similarly in this case, we can see that the primitive value string strprimitive achieves more operations per second than strobject. In addition, JS engine differences across sequential browser versions produce a variety of measurements.

    4. match() method

    Since there are similar results here too, to save some space, you can click the source link to view the benchmark.

    (Primitive value string vs Wrapper Object String – match(), on jsPerf)

    5. replace() method

    (Primitive value string vs Wrapper Object String – replace(), on jsPerf)

    6. toUpperCase() method

    (Primitive value string vs Wrapper Object String – toUpperCase(), on jsPerf)

    7. valueOf() method

      var i = strprimitive.valueOf();
      var k = strobject.valueOf();

    At this point it starts to get more interesting. So, what happens when we try to call the most common method of a string, its valueOf()? It seems like most browsers have a mechanism to determine whether it’s a primitive value string or an Object String, and thus use a much faster way to get its value; surprisingly enough, Firefox versions up to v20 seem to favour the Object String method call on strobject, with roughly 7x the speed.


    (Primitive value string vs Wrapper Object String – valueOf(), on jsPerf)

    It’s also worth mentioning that Chrome 22.0.1229 also seems to have favoured the Object String, while version 23.0.1271 implemented a new way to get the content of primitive value strings.

    A simpler way to run this benchmark in your browser’s console is described in the comment of the jsperf page.

    8. Adding two strings

      var i = strprimitive + " there";
      var k = strobject + " there";


    (Primitive string vs Wrapper Object String – get str value, on jsPerf)

    Let’s now try concatenating the two strings with a primitive string value. As the chart shows, Firefox and Chrome show a 2.8x and 2x speed advantage respectively in favour of strprimitive, compared with concatenating the Object String strobject with another string value.

    9. Adding two strings with valueOf()

      var i = strprimitive.valueOf() + " there";
      var k = strobject.valueOf() + " there";


    (Primitive string vs Wrapper Object String – str valueOf, on jsPerf)

    Here we can see again that Firefox favours strobject.valueOf(), since for strprimitive.valueOf() it moves up the inheritance tree and consequently creates a new wrapper object for strprimitive. The effect this chain of events has on performance can also be seen in the next case.

    10. for-in wrapper object

      var i = "";
      for (var temp in strprimitive) { i += strprimitive[temp]; }
     
      var k = "";
      for (var temp in strobject) { k += strobject[temp]; }

    This benchmark will incrementally construct the string’s value through a loop to another variable. In the for-in loop, the expression to be evaluated is normally an object, but if the expression is a primitive value, then this value gets coerced to its equivalent wrapper object. Of course, this is not a recommended method to get the value of a string, but it is one of the many ways a wrapper object can be created, and thus it is worth mentioning.


    (Primitive string vs Wrapper Object String – Properties, on jsPerf)

    As expected, Chrome seems to favour the primitive value string strprimitive, while Firefox and Safari seem to favour the object string strobject. In case this all seems fairly predictable, let’s move on to the last benchmark.

    11. Adding two strings with an Object String

      var str3 = new String(" there");
     
      var i = strprimitive + str3;
      var k = strobject + str3;


    (Primitive string vs Wrapper Object String – 2 str values, on jsPerf)

    In the previous examples, we have seen that Firefox versions offer better performance if our initial string is an Object String, like strobject, and thus it would seem natural to expect the same when adding strobject to another Object String, which is basically the same thing. It is worth noticing, though, that when adding a string to an Object String, it is actually considerably faster in Firefox if we use strprimitive instead of strobject. This proves once more how source code variations, like a patch to a bug, lead to different benchmark numbers.

    Conclusion

    Based on the benchmarks described above, we have seen a number of ways in which subtle differences in our string declarations can produce very different performance results. It is recommended that you continue to declare your string variables as you normally do, unless there is a very specific reason for you to create instances of the String Object. Also, note that a browser’s overall performance, particularly when dealing with the DOM, is not based only on the page’s JS performance; there is a lot more in a browser than its JS engine.

    Feedback comments are much appreciated. Thanks :-)

  6. Old tricks for new browsers – a talk at jQuery UK 2012

    Last Friday around 300 developers went to Oxford, England to attend jQuery UK and learn about all that is hot and new about their favourite JavaScript library. Imagine their surprise when I went on stage to tell them that a lot of what jQuery is used for these days doesn’t need it. If you want to learn more about the talk itself, there is a detailed report, slides and the audio recording available.

    The point I was making is that libraries like jQuery were first and foremost there to give us a level playing field as developers. We should not have to know the quirks of every browser, and this is where using a library allows us to concentrate on the task at hand and not on how it will fail in 10-year-old browsers.

    jQuery’s revolutionary new way of looking at web design was based on two main things: accessing the document via CSS selectors rather than the unwieldy DOM methods and chaining of JavaScript commands. jQuery then continued to make event handling and Ajax interactions easier and implemented the Easing equations to allow for slick and beautiful animations.

    However, this simplicity came at a price: developers seem to forget a few very simple techniques that allow you to write terse, easy-to-understand JavaScript that doesn’t rely on jQuery. Amongst others, the most powerful ones are event delegation and assigning classes to parent elements, leaving the main work to CSS.

    Event delegation

    Event delegation means that instead of applying an event handler to each of the child elements in an element, you assign one handler to the parent element and let the browser do the rest for you. Events bubble up the DOM of a document, firing on the element you are interested in and on each of its parent elements. That way, all you have to do is compare the event’s target with the element you want to access. Say you have a to-do list in your document. All the HTML you need is:

    <ul id="todo">
      <li>Go round Mum's</li>
      <li>Get Liz back</li>
      <li>Sort life out!</li>
    </ul>

    In order to add event handlers to these list items, in jQuery beginners are tempted to do a $('#todo li').click(function(ev){...}); or – even worse – add a class to each list item and then access these. If you use event delegation all you need in JavaScript is:

    document.querySelector('#todo').addEventListener( 'click', 
      function( ev ) {
        var t = ev.target;
        if ( t.tagName === 'LI' ) {
          alert( t + t.innerHTML ); 
          ev.preventDefault();
        }
    }, false);

    Newer browsers have the querySelector and querySelectorAll methods (see support here) that give you access to DOM elements via CSS selectors – something we learned from jQuery. We use this here to access the to-do list. Then we apply an event listener for click to the list.

    We read out which element has been clicked with ev.target and compare its tagName to LI (this property is always uppercase). This means we will never execute the rest of the code when the user for example clicks on the list itself. We call preventDefault() to tell the browser not to do anything – we now take over.

    You can try this out in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of event delegation is that you can now add new items without ever having to re-assign handlers: as the main click handler is on the list, new items are automatically covered by it. Try it out in this fiddle or embedded below:

    JSFiddle demo.

    Leaving styling and DOM traversal to CSS

    Another big use case of jQuery is to access a lot of elements at once and change their styling by manipulating their styles collection with the jQuery css() method. This is seemingly handy but also annoying, as you put styling information in your JavaScript. What if there is a rebranding later on? Where do people find the colours to change? It is much simpler to add a class to the element in question and leave the rest to CSS. If you think about it, a lot of the time we repeat the same CSS selectors in jQuery and the style document. That seems redundant.

    Adding and removing classes in the past was a bit of a nightmare. The way to do it was using the className property of a DOM element which contained a string. It was then up to you to find if a certain class name is in that string and to remove and add classes by adding to or using replace() on the string. Again, browsers learned from jQuery and now have a classList object (support here) that allows easy manipulation of CSS classes applied to elements. You have add(), remove(), toggle() and contains() to play with.
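
    For example, the typical operations become one-liners:

    var el = document.querySelector( '#content' );
     
    el.classList.add( 'current' );      // add a class
    el.classList.remove( 'current' );   // remove it again
    el.classList.toggle( 'current' );   // add if missing, remove if present
     
    if ( el.classList.contains( 'current' ) ) {
      // react to the element's state without any string fiddling
    }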

    This makes it dead easy to style a lot of elements and to single them out for different styling. Let’s say for example we have a content area and want to show one at a time. It is tempting to loop over the elements and do a lot of comparison, but all we really need is to assign classes and leave the rest to CSS. Say our content is a navigation pointing to articles. This works in all browsers:

    <header>
      <h1>Profit plans</h1>
    </header>
    <section id="content">
      <nav id="nav">
        <ul>
          <li><a href="#step1">Step 1: Collect Underpants</a></li>
          <li><a href="#step2">Step 2: ???</a></li>
          <li><a href="#step3">Step 3: Profit!</a></li>
        </ul>
      </nav>
      <article id="step1">
        <header><h1>Step 1: Collect Underpants</h1></header>
        <section>
          <p>
            Make sure Tweek doesn't expect anything, then steal underwear 
            and bring it to the mine.
          </p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step2">
        <header><h1>Step 2: ???</h1></header>
        <section>
          <p>WIP</p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
      <article id="step3">
        <header><h1>Step 3: Profit</h1></header>
        <section>
          <p>Yes, profit will come. Let's sing the underpants gnome song.</p>
        </section>
        <footer><a href="#nav">back to top</a></footer>
      </article>
    </section>

    Now in order to hide all the articles, all we do is assign a ‘js’ class to the body of the document and store the first link and first article in the content section in variables. We assign a class called ‘current’ to each of those.

    /* grab all the elements we need */
    var nav = document.querySelector( '#nav' ),
        content = document.querySelector( '#content' ),
     
    /* grab the first article and the first link */
        article = document.querySelector( '#content article' ),
        link = document.querySelector( '#nav a' );
     
    /* hide everything by applying a class called 'js' to the body */
    document.body.classList.add( 'js' );
     
    /* show the current article and link */ 
    article.classList.add( 'current' );
    link.classList.add( 'current' );

    Together with some simple CSS, this hides them all off-screen:

    /* change content to be a content panel */
    .js #content {
      position: relative;
      overflow: hidden;
      min-height: 300px;
    }
     
    /* push all the articles up */
    .js #content article {
      position: absolute;
      top: -700px;
      left: 250px;
    }
    /* hide 'back to top' links */
    .js article footer {
      position: absolute;
      left: -20000px;
    }

    In this case we move the articles up. We also hide the “back to top” links as they are redundant when we hide and show the articles. To show and hide the articles all we need to do is assign a class called “current” to the one we want to show that overrides the original styling. In this case we move the article down again.

    /* keep the current article visible */
    .js #content article.current {
      top: 0;
    }

    In order to achieve that all we need to do is a simple event delegation on the navigation:

    /* event delegation for the navigation */
    nav.addEventListener( 'click', function( ev ) {
      var t = ev.target;
      if ( t.tagName === 'A' ) {
        /* remove old styles */
        link.classList.remove( 'current' );
        article.classList.remove( 'current' );
        /* get the new active link and article */
        link = t;
        article = document.querySelector( link.getAttribute( 'href' ) );
        /* show them by assigning the current class */
        link.classList.add( 'current' );
        article.classList.add( 'current' );
      }
    }, false);

    The simplicity here lies in the fact that the links already point to the elements with these IDs on them. So all we need to do is read the href attribute of the link that was clicked.

    See the final result in this fiddle or embedded below.

    JSFiddle demo.

    Keeping the visuals in the CSS

    Mixed with CSS transitions or animations (support here), this can be made much smoother in a very simple way:

    .js #content article {
      position: absolute;
      top: -300px;
      left: 250px;
      -moz-transition: 1s;
      -webkit-transition: 1s;
      -ms-transition: 1s;
      -o-transition: 1s;
      transition: 1s;
    }

    The transition now simply goes smoothly in one second from the state without the ‘current’ class to the one with it. In our case, moving the article down. You can add more properties by editing the CSS – no need for more JavaScript. See the result in this fiddle or embedded below:

    JSFiddle demo.

    As we also toggle the current class on the link, we can do more. It is simple to add visual extras like a “you are here” state by using CSS generated content with the :after selector (support here). That way you can add visual nice-to-haves without needing to generate HTML in JavaScript or resort to images.

    .js #nav a:hover:after, .js #nav a:focus:after, .js #nav a.current:after {
      content: '➭';
      position: absolute;
      right: 5px;
    }

    See the final result in this fiddle or embedded below:

    JSFiddle demo.

    The benefit of this technique is that we keep all the look and feel in CSS and make it much easier to maintain. And by using CSS transitions and animations you also leverage hardware acceleration.

    Give them a go, please?

    All of these things work across the browsers we use these days, and with polyfills they can be made to work in old browsers, too. However, not everything needs to be applied to old browsers. As web developers we should look ahead and not cater to outdated technology. If the things I showed above fall back to server-side solutions or page reloads in IE6, nobody is going to be the wiser. Let’s build escalator solutions – smooth when the tech works but still usable as stairs when it doesn’t.


  7. HTML5 drag and drop in Firefox 3.5

    This post is from Les Orchard, who works on Mozilla’s web development team.

    Introduction

    Drag and drop is one of the most fundamental interactions afforded by graphical user interfaces. In one gesture, it allows users to pair the selection of an object with the execution of an action, often including a second object in the operation. It’s a simple yet powerful UI concept used to support copying, list reordering, deletion (à la the Trash / Recycle Bin), and even the creation of link relationships.

    Since it’s so fundamental, offering drag and drop in web applications has been a no-brainer ever since browsers first offered mouse events in DHTML. But, although mousedown, mousemove, and mouseup made it possible, the implementation has been limited to the bounds of the browser window. Additionally, since these events refer only to the object being dragged, there’s a challenge to find the subject of the drop when the interaction is completed.

    Of course, that doesn’t prevent most modern JavaScript frameworks from abstracting away most of the problems and throwing in some flourishes while they’re at it. But, wouldn’t it be nice if browsers offered first-class support for drag and drop, and maybe even extended it beyond the window sandbox?

    As it turns out, this very wish is answered by the HTML 5 specification section on new drag-and-drop events, and Firefox 3.5 includes an implementation of those events.

    If you want to jump straight to the code, I’ve put together some simple demos of the new events.

    I’ve even scratched an itch of my own and built the beginnings of an outline editor, where every draggable element is also a drop target—of which there could be dozens to hundreds in a complex document, something that gave me some minor hair-tearing moments in the past while trying to make do with plain old mouse events.

    And, all the above can be downloaded or cloned from a GitHub repository I’ve created especially for this article.

    The New Drag and Drop Events

    So, with no further ado, here are the new drag and drop events, in roughly the order you might expect to see them fired:

    dragstart
    A drag has been initiated, with the dragged element as the event target.
    drag
    The mouse has moved, with the dragged element as the event target.
    dragenter
    The dragged element has been moved into a drop listener, with the drop listener element as the event target.
    dragover
    The dragged element has been moved over a drop listener, with the drop listener element as the event target. Since the default behavior is to cancel drops, returning false or calling preventDefault() in the event handler indicates that a drop is allowed here.
    dragleave
    The dragged element has been moved out of a drop listener, with the drop listener element as the event target.
    drop
    The dragged element has been successfully dropped on a drop listener, with the drop listener element as the event target.
    dragend
    A drag has been ended, successfully or not, with the dragged element as the event target.

    Like the mouse events of yore, listeners can be attached to elements using addEventListener() directly or by way of your favorite JS library.

    Consider the following example using jQuery, also available as a live demo:

        <div id="newschool">
            <div class="dragme">Drag me!</div>
            <div class="drophere">Drop here!</div>
        </div>
     
        <script type="text/javascript">
            $(document).ready(function() {
                $('#newschool .dragme')
                    .attr('draggable', 'true')
                    .bind('dragstart', function(ev) {
                        var dt = ev.originalEvent.dataTransfer;
                        dt.setData("Text", "Dropped in zone!");
                        return true;
                    })
                    .bind('dragend', function(ev) {
                        return false;
                    });
                $('#newschool .drophere')
                    .bind('dragenter', function(ev) {
                        $(ev.target).addClass('dragover');
                        return false;
                    })
                    .bind('dragleave', function(ev) {
                        $(ev.target).removeClass('dragover');
                        return false;
                    })
                    .bind('dragover', function(ev) {
                        return false;
                    })
                    .bind('drop', function(ev) {
                        var dt = ev.originalEvent.dataTransfer;
                        alert(dt.getData('Text'));
                        return false;
                    });
            });
        </script>

    Thanks to the new events and jQuery, this example is both short and simple—but it packs in a lot of functionality, as the rest of this article will explain.

    Before moving on, there are at least three things about the above code that are worth mentioning:

    • Drop targets are enabled by virtue of having listeners for drop events. But, per the HTML 5 spec, draggable elements need an attribute of draggable="true", set either in markup or in JavaScript.

      Thus, $('#newschool .dragme').attr('draggable', 'true').

    • The original DOM event (as opposed to jQuery’s event wrapper) offers a property called dataTransfer. Beyond just manipulating elements, the new drag and drop events accommodate the transmission of user-definable data during the course of the interaction.
    • Since these are first-class events, you can apply the technique of Event Delegation.

      What’s that? Well, imagine you have a list of 1000 list items—as part of a deeply-nested outline document, for instance. Rather than needing to attach listeners or otherwise fiddle with all 1000 items, simply attach a listener to the parent node (eg. the <ul> element) and all events from the children will propagate up to the single parent listener. As a bonus, all new child elements added after page load will enjoy the same benefits. A minimal sketch of this pattern appears after this list.

      Check out this demo, and the associated JS code to see more about these events and Event Delegation.
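
    Here is a minimal sketch of that delegation pattern with the new drag events (the outline id and markup are illustrative):

        // one listener on the parent <ul> handles dragstart for every
        // <li> inside it - including items added after page load
        // (assuming the items carry draggable="true")
        document.getElementById('outline').addEventListener(
            'dragstart', function(ev) {
                var t = ev.target;
                if (t.tagName === 'LI') {
                    ev.dataTransfer.setData('Text', t.textContent);
                }
            }, false);

    Note that with a raw DOM listener, the dataTransfer property sits directly on the event – there is no originalEvent indirection as in the jQuery example above.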

    Using dataTransfer

    As mentioned in the last section, the new drag and drop events let you send data along with a dragged element. But, it’s even better than that: Your drop targets can receive data transferred by content objects dragged into the window from other browser windows, and even other applications.

    Since the example is a bit longer, check out the live demo and associated code to get an idea of what’s possible with dataTransfer.

    In a nutshell, the stars of this show are the setData() and getData() methods of the dataTransfer property exposed by the Event object.

    The setData() method is typically called in the dragstart listener, loading dataTransfer up with one or more strings of content with associated recommended content types.

    For illustration, here’s a quick snippet from the example code:

        var dt = ev.originalEvent.dataTransfer;    
        dt.setData('text/plain', $('#logo').parent().text());
        dt.setData('text/html', $('#logo').parent().html());
        dt.setData('text/uri-list', $('#logo')[0].src);

    On the other end, getData() allows you to query for content by type (eg. text/html followed by text/plain). This, in turn, allows you to decide on acceptable content types at the time of the drop event or even during dragover to offer feedback for unacceptable types during the drag.

    Here’s another example from the receiving end of the example code:

        var dt = ev.originalEvent.dataTransfer;    
        $('.content_url .content').text(dt.getData('text/uri-list'));
        $('.content_text .content').text(dt.getData('text/plain'));
        $('.content_html .content').html(dt.getData('text/html'));

    Where dataTransfer really shines, though, is that it allows your drop targets to receive content from sources outside your defined draggable elements and even from outside the browser altogether. Firefox accepts such drags, and attempts to populate dataTransfer with appropriate content types extracted from the external object.

    Thus, you could select some text in a word processor window and drop it into one of your elements, and at least expect to find it available as text/plain content.

    You can also select content in another browser window, and expect to see text/html appear in your events. Check out the outline editing demo and see what happens when you try dragging various elements (eg. images, tables, and lists) and highlighted content from other windows onto the items there.

    Using Drag Feedback Images

    An important aspect of the drag and drop interaction is a representation of the thing being dragged. By default in Firefox, this is a “ghost” image of the dragged element itself. But, the dataTransfer property of the original Event object exposes the method setDragImage() for use in customizing this representation.

    There’s a live demo of this feature, as well as associated JS code available. The gist, however, is sketched out in these code snippets:

        var dt = ev.originalEvent.dataTransfer;

        // use a heading element as the drag feedback image...
        dt.setDragImage( $('#feedback_image h2')[0], 0, 0);

        // ...or an image, with the mouse centered in it...
        dt.setDragImage( $('#logo')[0], 32, 32);

        // ...or even a freshly drawn <canvas> element
        var canvas = document.createElement("canvas");
        canvas.width = canvas.height = 50;

        var ctx = canvas.getContext("2d");
        ctx.lineWidth = 8;
        ctx.moveTo(25, 0);
        ctx.lineTo(50, 50);
        ctx.lineTo(0, 50);
        ctx.lineTo(25, 0);
        ctx.stroke();

        dt.setDragImage(canvas, 25, 25);

    You can supply a DOM node as the first parameter to setDragImage(), which includes everything from text to images to <canvas> elements. The second two parameters indicate at what left and top offset the mouse should appear in the image while dragging.

    For example, since the #logo image is 64×64, the parameters in the second setDragImage() call place the mouse right in the center of the image. On the other hand, the first call positions the feedback image so that the mouse rests in the upper left corner.

    Using Drop Effects

    As mentioned at the start of this article, the drag and drop interaction has been used to support actions such as copying, moving, and linking. Accordingly, the HTML 5 specification accommodates these operations in the form of the effectAllowed and dropEffect properties exposed by the Event object.

    For a quick fix, check out the live demo of this feature, as well as the associated JS code.

    The basic idea is that the dragstart event listener can set a value for effectAllowed like so:

        var dt = ev.originalEvent.dataTransfer;
        switch (ev.target.id) {
            case 'effectdrag0': dt.effectAllowed = 'copy'; break;
            case 'effectdrag1': dt.effectAllowed = 'move'; break;
            case 'effectdrag2': dt.effectAllowed = 'link'; break;
            case 'effectdrag3': dt.effectAllowed = 'all'; break;
            case 'effectdrag4': dt.effectAllowed = 'none'; break;
        }

    The choices available for this property include the following:

    none
    no operation is permitted
    copy
    copy only
    move
    move only
    link
    link only
    copyMove
    copy or move only
    copyLink
    copy or link only
    linkMove
    link or move only
    all
    copy, move, or link

    On the other end, the dragover event listener can set the value of the dropEffect property to indicate the expected effect invoked on a successful drop. If the value does not match up with effectAllowed, the drop will be considered cancelled on completion.

    In the live demo, you should be able to see that only elements with matching effects can be dropped into the appropriate drop zones. This is accomplished with code like the following:

        var dt = ev.originalEvent.dataTransfer;
        switch (ev.target.id) {
            case 'effectdrop0': dt.dropEffect = 'copy'; break;
            case 'effectdrop1': dt.dropEffect = 'move'; break;
            case 'effectdrop2': dt.dropEffect = 'link'; break;
            case 'effectdrop3': dt.dropEffect = 'all'; break;
            case 'effectdrop4': dt.dropEffect = 'none'; break;
        }

    Although the OS itself can provide some feedback, you can also use these properties to update your own visible feedback, both on the dragged element and on the drop zone itself.

    Conclusion

    The new first-class drag and drop events in HTML5 and Firefox make supporting this form of UI interaction simple, concise, and powerful in the browser. But beyond the new simplicity of these events, the ability to transfer content between applications opens brand new avenues for web-based applications and collaboration with desktop software in general.

  8. Geolocation in Firefox 3.5

    This post is from Doug Turner, one of the engineers behind the geolocation support in Firefox 3.5.

    Location is all around us. As of this writing, I am in a coffee shop in Toronto, Canada. If I type google into the URL bar, it takes me to www.google.ca, the Canadian version of Google, based on my IP address. And when I want to find the closest movie theater to where I am located, I typically just type in my postal code. That information is often stored with the site so that it’s easier to find the movie theater next time. In these two situations, having the web application automatically figure out where I am is much more convenient. In fact, I have no idea what the postal code is for Toronto. I know how to find it, but that is a lot of work just to tell a web application where I am.

    Firefox 3.5 includes a simple JavaScript API that allows you to quickly geo-enable your web application. It allows users to optionally share their location with websites without having to type in a postal code. What follows is a general overview of how to use geolocation in Firefox 3.5 and how it works, along with some of the precautions you should take when using it.

    The Basics

    Getting the user’s location is very easy:

    function showPosition(position) {
        alert(position.coords.latitude + " " +
              position.coords.longitude);
    }

    navigator.geolocation.getCurrentPosition(showPosition);

    The call to getCurrentPosition gets the user’s current location and puts it into an alert dialog. Location is expressed in terms of latitude and longitude. Yes, it’s that simple.

    When you ask for that information, the user will see a notification bar asking whether to share their location with the site. Their options are to allow or not, and to remember that choice or not.

    Handling Errors

    It is important to handle two error cases in your code:

    First, the user can deny or fail to respond to the request for location information. The API allows you to set an optional error callback: if the user explicitly cancels the request, your error callback will be called with an error code. In the case where the user doesn’t respond, no callback is fired. To handle that case, pass a timeout parameter to the getCurrentPosition call, and your error callback will be invoked when the timer expires.

    navigator.geolocation.getCurrentPosition(successCallback,
                                             errorCallback,
                                             {timeout:30000});

    With this code your errorCallback function will be called if the user cancels. It will also be called if 30 seconds pass and the user hasn’t responded.
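
    The error callback itself might look something like this sketch; the codes come from the PositionError interface in the draft specification:

    function errorCallback(err) {
        switch (err.code) {
            case err.PERMISSION_DENIED:    // the user refused the request
                alert("Location request denied.");
                break;
            case err.POSITION_UNAVAILABLE: // position could not be determined
                alert("Location unavailable.");
                break;
            case err.TIMEOUT:              // no answer before the timer expired
                alert("Location request timed out.");
                break;
        }
    }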

    Second, the accuracy of the user’s location can vary quite a bit over time. This might happen for a few reasons:

    • Different methods for determining a person’s location have different levels of accuracy.
    • The user might choose not to share his or her exact location with you.
    • Many GPS devices have limited accuracy depending on the view of the sky. If your view of the sky worsens over time, so can the accuracy.
    • Many GPS devices can take several minutes to go from a very rough location to a very specific location, even with a good view of the sky.

    These cases can and will happen, and supporting changes in accuracy is important to provide a good user experience.

    If you want to monitor the location as it changes, you can use the watchPosition callback API:

    navigator.geolocation.watchPosition(showPosition);

    showPosition will be called every time there is a position change.

    Note that you can also watch for changes in position by calling getCurrentPosition on a regular basis. But for power savings and performance reasons we suggest that you use watchPosition when you can. Callback APIs generally save power and are only called when required. This will make the browser more responsive, especially on mobile devices.
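
    A sketch tying these pieces together; updateMap() here is a hypothetical function in your own application, and coords.accuracy is the estimated accuracy radius in meters:

    var watchId = navigator.geolocation.watchPosition(function(position) {
        updateMap(position.coords.latitude,
                  position.coords.longitude,
                  position.coords.accuracy); // may shrink as the fix improves
    });

    // Later, when position updates are no longer needed:
    navigator.geolocation.clearWatch(watchId);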

    For more information, please take a look at the API draft specification which has other examples which may be useful.

    Under the Hood

    There are a few common ways to get location information. The most common are local WiFi networks, IP address information, and attached GPS devices. In Firefox 3.5 we use local WiFi networks and IP address information to try and guess your location.

    There are a few companies that drive cars around listening for WiFi access points and recording each access point’s signal strength at a specific point on the planet. They then throw all of this collected data into a big database. Then they have algorithms that allow you to ask the question “If I see these access points, where am I?”. This is Firefox 3.5’s primary way of figuring out your location.

    But not everyone has a WiFi card. And not every place has been scanned for WiFi. In that case Firefox will use your local IP address to try to figure out your location using a reverse database lookup that roughly maps your IP address to a location. IP-derived locations often have much lower accuracy than WiFi-derived locations. As an example, here in Toronto, a location from WiFi is accurate to within 150 meters, while the same location derived from just an IP address is only accurate to about 25 km.

    Privacy

    Protecting a user’s privacy is very important to Mozilla – it’s part of our core mission. Within the realm of data that people collect online, location can be particularly sensitive. In fact, the EU considers location information personally identifiable information (PII), which must be handled accordingly (Directive 95/46/EC). We believe that users should have strict control over when their data is shared with web sites. This is why Firefox asks before sharing location information with a web site, allows users to easily “forget” all of the places they have shared their location with, and surfaces sharing settings in Page Info.

    Firefox does what it can to protect our users’ privacy but in addition the W3C Geolocation working group has proposed these privacy considerations for web site developers and operators:

    • Recipients must only request location information when necessary.
    • Recipients must only use the location information for the task for which it was provided to them.
    • Recipients must dispose of location information once that task is completed, unless expressly permitted to retain it by the user.
    • Recipients must also take measures to protect this information against unauthorized access.
    • If location information is stored, users should be allowed to update and delete this information.
    • The recipient of location information must not retransmit the location information without the user’s consent. Care should be taken when retransmitting and use of HTTPS is encouraged.
    • Recipients must clearly and conspicuously disclose the fact that they are collecting location data, the purpose for the collection, how long the data is retained, how the data is secured, how the data is shared if it is shared, how users may access, update and delete the data, and any other choices that users have with respect to the data. This disclosure must include an explanation of any exceptions to the guidelines listed above.

    Obviously these are voluntary suggestions, but we hope they form a basis for good web site behavior that users will help enforce.

    Caveats

    We have implemented the first public draft of the Geolocation specification from the W3C. Some minor things may change, but we will encourage the working group to maintain backwards compatibility.

    The only issue that we know about that may affect you is the possible renaming of enableHighAccuracy to another name such as useLowPower. Firefox 3.5 includes the enableHighAccuracy call for compatibility reasons, although it doesn’t do anything at the moment. If the call is renamed, we are very likely to include both versions for compatibility reasons.

    Conclusion

    Firefox 3.5 represents the first step in support for Geolocation and a large number of other standards that are starting to make their way out of the various working groups. We know that people will love this feature for mapping applications, photo sites, and sites like Twitter and Facebook. What is most interesting to us is knowing that people will find new uses for this that we haven’t even thought of. The web is changing and location information plays a huge role in that. And we’re happy to be a part of it.

  9. ECMAScript 5 strict mode in Firefox 4

    Editor’s note: This article is posted by Chris Heilmann but authored by Jeff Walden – credit where credit is due.

    Developers in the Mozilla community have made major improvements to the JavaScript engine in Firefox 4. We have devoted much effort to improving performance, but we’ve also worked on new features. We have particularly focused on ECMAScript 5, the latest update to the standard underlying JavaScript.

    Strict mode is arguably the most interesting new feature in ECMAScript 5. It’s a way to opt in to a restricted variant of JavaScript. Strict mode isn’t just a subset: it intentionally has different semantics from normal code. Browsers not supporting strict mode will run strict mode code with different behavior from browsers that do, so don’t rely on strict mode without feature-testing for support for the relevant aspects of strict mode.

    Strict mode code and non-strict mode code can coexist, so scripts can opt into strict mode incrementally. Strict mode blazes a path to future ECMAScript editions where new code with a particular <script type="..."> will likely automatically be executed in strict mode.

    What does strict mode do? First, it eliminates some JavaScript pitfalls that didn’t cause errors by changing them to produce errors. Second, it fixes mistakes that make it difficult for JavaScript engines to perform optimizations: strict mode code can sometimes be made to run faster than identical code that’s not strict mode. Firefox 4 generally hasn’t optimized strict mode yet, but subsequent versions will. Third, it prohibits some syntax likely to be defined in future versions of ECMAScript.

    Invoking strict mode

    Strict mode applies to entire scripts or to individual functions. It doesn’t apply to block statements enclosed in {} braces; attempting to apply it to such contexts does nothing. eval code, event handler attributes, strings passed to setTimeout, and the like are entire scripts, and invoking strict mode in them works as expected.
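
    For instance, a string passed to eval or setTimeout is an entire script of its own, so a directive at its start takes effect; a small illustration:

    eval("'use strict'; undeclaredVariable = 1;");       // throws a ReferenceError
    setTimeout("'use strict'; undeclaredVariable = 1;"); // same error, asynchronously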

    Strict mode for scripts

    To invoke strict mode for an entire script, put the exact statement "use strict"; (or 'use strict';) before any other statements.

    // Whole-script strict mode syntax
    "use strict";
    var v = "Hi!  I'm a strict mode script!";

    This syntax has a trap that has already bitten a major site: it isn’t possible to blindly concatenate non-conflicting scripts. Consider concatenating a strict mode script with a non-strict mode script: the entire concatenation looks strict! The inverse is also true: non-strict plus strict looks non-strict. Concatenation of strict mode scripts with each other is fine, and concatenation of non-strict mode scripts is fine. Only crossing the streams by concatenating strict and non-strict scripts is problematic.
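
    A sketch of the trap, using two hypothetical files:

    // a.js (strict)
    "use strict";
    var a = 1;

    // b.js (non-strict, leans on sloppy-mode behavior)
    leakedGlobal = 2; // fine when b.js loads on its own

    // a.js + b.js concatenated: the directive now governs b.js's code too,
    // so the assignment throws a ReferenceError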

    Strict mode for functions

    Likewise, to invoke strict mode for a function, put the exact statement "use strict"; (or 'use strict';) in the function’s body before any other statements.

    function strict()
    {
      // Function-level strict mode syntax
      'use strict';
      function nested() { return "And so am I!"; }
      return "Hi!  I'm a strict mode function!  " + nested();
    }
    function notStrict() { return "I'm not strict."; }

    Changes in strict mode

    Strict mode changes both syntax and runtime behavior. Changes generally fall into these categories:

    • Converting mistakes into errors (as syntax errors or at runtime)
    • Simplifying how the particular variable for a given use of a name is computed
    • Simplifying eval and arguments
    • Making it easier to write “secure” JavaScript
    • Anticipating future ECMAScript evolution

    Converting mistakes into errors

    Strict mode changes some previously-accepted mistakes into errors. JavaScript was designed to be easy for novice developers, and sometimes it gives operations which should be errors non-error semantics. Sometimes this fixes the immediate problem, but sometimes this creates worse problems in the future. Strict mode treats these mistakes as errors so that they’re discovered and promptly fixed.

    First, strict mode makes it impossible to accidentally create global variables. In normal JavaScript, mistyping a variable in an assignment creates a new property on the global object and continues to “work” (although future failure is possible: likely, in modern JavaScript). Assignments which would accidentally create global variables instead throw errors in strict mode:

    "use strict";
    mistypedVaraible = 17; // throws a ReferenceError

    Second, strict mode makes assignments which would otherwise silently fail throw an exception. For example, NaN is a non-writable global variable. In normal code assigning to NaN does nothing; the developer receives no failure feedback. In strict mode assigning to NaN throws an exception. Any assignment that silently fails in normal code will throw errors in strict mode:

    "use strict";
    NaN = 42; // throws a TypeError
    var obj = { get x() { return 17; } };
    obj.x = 5; // throws a TypeError
    var fixed = {};
    Object.preventExtensions(fixed);
    fixed.newProp = "ohai"; // throws a TypeError

    Third, if you attempt to delete undeletable properties, strict mode throws errors (where before the attempt would simply have no effect):

    "use strict";
    delete Object.prototype; // throws a TypeError

    Fourth, strict mode requires that all properties named in an object literal be unique. Normal code may duplicate property names, with the last one determining the property’s value. But since only the last one takes effect, the duplication is simply a vector for bugs if the code is later modified to change the property’s value anywhere other than at the last instance. Duplicate property names are a syntax error in strict mode:

    "use strict";
    var o = { p: 1, p: 2 }; // !!! syntax error

    Fifth, strict mode requires that function argument names be unique. In normal code the last duplicated argument hides previous identically-named arguments. Those previous arguments remain available through arguments[i], so they’re not completely inaccessible. Still, this hiding makes little sense and is probably undesirable (it might hide a typo, for example), so in strict mode duplicate argument names are a syntax error:

    function sum(a, a, c) // !!! syntax error
    {
      "use strict";
      return a + b + c; // wrong if this code ran
    }

    Sixth, strict mode forbids octal syntax. Octal syntax isn’t part of ECMAScript, but it’s supported in all browsers by prefixing the octal number with a zero: 0644 === 420 and "\045" === "%". Novice developers sometimes believe a leading zero prefix has no semantic meaning, so they use it as an alignment device — but this changes the number’s meaning! Octal syntax is rarely useful and can be mistakenly used, so strict mode makes octal a syntax error:

    "use strict";
    var sum = 015 + // !!! syntax error
              197 +
              142;

    Simplifying variable uses

    Strict mode simplifies how variable uses map to particular variable definitions in the code. Many compiler optimizations rely on the ability to say that this variable is stored in this location: this is critical to fully optimizing JavaScript code. JavaScript sometimes makes this basic mapping of name to variable definition in the code impossible to perform except at runtime. Strict mode removes most cases where this happens, so the compiler can better optimize strict mode code.

    First, strict mode prohibits with. The problem with with is that any name in it might map either to a property of the object passed to it, or to a variable in surrounding code, at runtime: it’s impossible to know which beforehand. Strict mode makes with a syntax error, so there’s no chance for a name in a with to refer to an unknown location at runtime:

    "use strict";
    var x = 17;
    with (obj) // !!! syntax error
    {
      // If this weren't strict mode, would this be var x, or
      // would it instead be obj.x?  It's impossible in general
      // to say without running the code, so the name can't be
      // optimized.
      x;
    }

    The simple alternative of assigning the object to a variable, then accessing the corresponding property on that variable, stands ready to replace with.
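
    In sketch form:

    var o = obj; // alias the object once...
    o.x;         // ...then every access is unambiguously a property lookup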

    Second, eval of strict mode code does not introduce new variables into the surrounding code. In normal code eval("var x;") introduces a variable x into the surrounding function or the global scope. This means that, in general, in a function containing a call to eval, every name not referring to an argument or local variable must be mapped to a particular definition at runtime (because that eval might have introduced a new variable that would hide the outer variable). In strict mode eval creates variables only for the code being evaluated, so eval can’t affect whether a name refers to an outer variable or some local variable:

    var x = 17;
    var evalX = eval("'use strict'; var x = 42; x");
    assert(x === 17);
    assert(evalX === 42);

    Relatedly, if the function eval is invoked by an expression of the form eval(...) in strict mode code, the code will be evaluated as strict mode code. The code may explicitly invoke strict mode, but it’s unnecessary to do so.

    function strict1(str)
    {
      "use strict";
      return eval(str); // str will be treated as strict mode code
    }
    function strict2(f, str)
    {
      "use strict";
      return f(str); // not eval(...): str is strict iff it invokes strict mode
    }
    function nonstrict(str)
    {
      return eval(str); // str is strict iff it invokes strict mode
    }
    strict1("'Strict mode code!'");
    strict1("'use strict'; 'Strict mode code!'");
    strict2(eval, "'Non-strict code.'");
    strict2(eval, "'use strict'; 'Strict mode code!'");
    nonstrict("'Non-strict code.'");
    nonstrict("'use strict'; 'Strict mode code!'");

    Third, strict mode forbids deleting plain names. Thus names in strict mode eval code behave identically to names in strict mode code not being evaluated as the result of eval. Using delete name in strict mode is a syntax error:

    "use strict";
    eval("var x; delete x;"); // !!! syntax error

    Making eval and arguments simpler

    Strict mode makes arguments and eval less bizarrely magical. Both involve a considerable amount of magical behavior in normal code: eval to add or remove bindings and to change binding values, and arguments by its indexed properties aliasing named arguments. Strict mode makes great strides toward treating eval and arguments as keywords, although full fixes will not come until a future edition of ECMAScript.

    First, the names eval and arguments can’t be bound or assigned in language syntax. All these attempts to do so are syntax errors:

    "use strict";
    eval = 17;
    arguments++;
    ++eval;
    var obj = { set p(arguments) { } };
    var eval;
    try { } catch (arguments) { }
    function x(eval) { }
    function arguments() { }
    var y = function eval() { };
    var f = new Function("arguments", "'use strict'; return 17;");

    Second, strict mode code doesn’t alias properties of arguments objects created within it. In normal code within a function whose first argument is arg, setting arg also sets arguments[0], and vice versa (unless no arguments were provided or arguments[0] is deleted). For strict mode functions, arguments objects store the original arguments when the function was invoked. The value of arguments[i] does not track the value of the corresponding named argument, nor does a named argument track the value in the corresponding arguments[i].

    function f(a)
    {
      "use strict";
      a = 42;
      return [a, arguments[0]];
    }
    var pair = f(17);
    assert(pair[0] === 42);
    assert(pair[1] === 17);

    Third, arguments.callee is no longer supported. In normal code arguments.callee refers to the enclosing function. This use case is weak: simply name the enclosing function! Moreover, arguments.callee substantially hinders optimizations like inlining functions, because it must be made possible to provide a reference to the un-inlined function if arguments.callee is accessed. For strict mode functions, arguments.callee is a non-deletable property which throws an error when set or retrieved:

    "use strict";
    var f = function() { return arguments.callee; };
    f(); // throws a TypeError
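
    The fix is simply to give the function expression a name; a sketch:

    var factorial = function fac(n) { // "fac" replaces arguments.callee
      return n <= 1 ? 1 : n * fac(n - 1);
    };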

    “Securing” JavaScript

    Strict mode makes it easier to write “secure” JavaScript. Some websites now provide ways for users to write JavaScript which will be run by the website on behalf of other users. JavaScript in browsers can access the user’s private information, so such JavaScript must be partially transformed before it is run, to censor access to forbidden functionality. JavaScript’s flexibility makes it effectively impossible to do this without many runtime checks. Certain language functions are so pervasive that performing runtime checks has considerable performance cost. A few strict mode tweaks, plus requiring that user-submitted JavaScript be strict mode code and that it be invoked in a certain manner, substantially reduce the need for those runtime checks.

    First, the value passed as this to a function in strict mode isn’t boxed into an object. For a normal function, this is always an object: the provided object if called with an object-valued this; the value, boxed, if called with a Boolean, string, or number this; or the global object if called with an undefined or null this. (Use call, apply, or bind to specify a particular this.) Automatic boxing is a performance cost, but exposing the global object in browsers is a security hazard, because the global object provides access to functionality that “secure” JavaScript environments must restrict. Thus for a strict mode function, the specified this is used unchanged:

    "use strict";
    function fun() { return this; }
    assert(fun() === undefined);
    assert(fun.call(2) === 2);
    assert(fun.apply(null) === null);
    assert(fun.call(undefined) === undefined);
    assert(fun.bind(true)() === true);

    (Tangentially, built-in methods also now won’t box this if it is null or undefined. [This change is independent of strict mode but is motivated by the same concern about exposing the global object.] Historically, passing null or undefined to a built-in method like Array.prototype.sort() would act as if the global object had been specified. Now passing either value as this to most built-in methods throws a TypeError. Booleans, numbers, and strings are still boxed by these methods: it’s only when these methods would otherwise act on the global object that they’ve been changed.)
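
    A small illustration of that built-in change, under ES5 semantics:

    [].indexOf.call(null, 1);   // now throws a TypeError
    [].indexOf.call("ab", "b"); // a string is still boxed: returns 1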

    Second, in strict mode it’s no longer possible to “walk” the JavaScript stack via commonly-implemented extensions to ECMAScript. In normal code with these extensions, when a function fun is in the middle of being called, fun.caller is the function that most recently called fun, and fun.arguments is the arguments for that invocation of fun. Both extensions are problematic for “secure” JavaScript, because they allow “secured” code to access “privileged” functions and their (potentially unsecured) arguments. If fun is in strict mode, both fun.caller and fun.arguments are non-deletable properties which throw an error when set or retrieved:

    function restricted()
    {
      "use strict";
      restricted.caller;    // throws a TypeError
      restricted.arguments; // throws a TypeError
    }
    function privilegedInvoker()
    {
      return restricted();
    }
    privilegedInvoker();

    Third, arguments for strict mode functions no longer provide access to the corresponding function call’s variables. In some old ECMAScript implementations arguments.caller was an object whose properties aliased variables in that function. This is a security hazard because it breaks the ability to hide privileged values via function abstraction; it also precludes most optimizations. For these reasons no recent browsers implement it. Yet because of its historical functionality, arguments.caller for a strict mode function is also a non-deletable property which throws an error when set or retrieved:

    "use strict";
    function fun(a, b)
    {
      "use strict";
      var v = 12;
      return arguments.caller; // throws a TypeError
    }
    fun(1, 2); // doesn't expose v (or a or b)

    Paving the way for future ECMAScript versions

    Future ECMAScript versions will likely introduce new syntax, and strict mode in ECMAScript 5 applies some restrictions to ease the transition. It will be easier to make some changes if the foundations of those changes are prohibited in strict mode.

    First, in strict mode a short list of identifiers become reserved keywords. These words are implements, interface, let, package, private, protected, public, static, and yield. In strict mode, then, you can’t name or use variables or arguments with these names. A Mozilla-specific caveat: if your code is JavaScript 1.7 or greater (you’re writing chrome code, or you’ve used the right <script type="">) and is strict mode code, let and yield have the functionality they’ve had since those keywords were first introduced. But strict mode code on the web, loaded with <script src=""> or <script>...</script>, won’t be able to use let/yield as identifiers.
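
    For example, both of the following are rejected in strict mode:

    "use strict";
    var let = 5;            // !!! syntax error
    function f(package) { } // !!! syntax error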

    Second, strict mode prohibits function statements not at the top level of a script or function. In normal code in browsers, function statements are permitted “everywhere”. This is not part of ES5! It’s an extension with incompatible semantics in different browsers. Future ECMAScript editions hope to specify new semantics for function statements not at the top level of a script or function. Prohibiting such function statements in strict mode “clears the deck” for specification in a future ECMAScript release:

    "use strict";
    if (true)
    {
      function f() { } // !!! syntax error
      f();
    }
    for (var i = 0; i < 5; i++)
    {
      function f2() { } // !!! syntax error
      f2();
    }
    function baz() // kosher
    {
      function eit() { } // also kosher
    }

    This prohibition isn’t strict mode proper, because such function statements are an extension. But it is the recommendation of the ECMAScript committee, and browsers will implement it.

    Strict mode in browsers

    Firefox 4 is the first browser to fully implement strict mode. The Nitro engine found in many WebKit browsers isn’t far behind with nearly-complete strict mode support. Chrome has also started to implement strict mode. Internet Explorer and Opera haven’t started to implement strict mode; feel free to send those browser makers feedback requesting strict mode support.

    Browsers don’t reliably implement strict mode, so don’t blindly depend on it. Strict mode changes semantics. Relying on those changes will cause mistakes and errors in browsers which don’t implement strict mode. Exercise caution in using strict mode, and back up reliance on strict mode with feature tests that check whether relevant features of strict mode are implemented.
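
    One such feature test is a sketch that checks whether the strict-mode this-binding change is honored:

    var strictSupported = (function() {
      "use strict";
      return this === undefined; // true only if strict mode is implemented
    })();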

    To test out strict mode, download a Firefox nightly and start playing. Also consider its restrictions when writing new code and when updating existing code. (To be absolutely safe, however, it’s probably best to wait to use it in production until it’s shipped in browsers.)