Mozilla

JavaScript Articles

  1. Gap between asm.js and native performance gets even narrower with float32 optimizations

    asm.js is a simple subset of JavaScript that is very easy to optimize, suitable for use as a compiler target from languages like C and C++. Earlier this year Firefox could run asm.js code at about half of native speed – that is, C++ code compiled by emscripten could run at about half the speed that the same C++ code could run when compiled natively – and we thought that through improvements in both emscripten (which generates asm.js code from C++) and JS engines (that run that asm.js code), it would be possible to get much closer to native speed.
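    To give a flavor of what that subset looks like, here is a minimal sketch of an asm.js-style module (an illustration, not actual emscripten output): the "use asm" directive and the |0 annotations give the engine the int32 type information it needs, while engines without asm.js support simply run it as ordinary JavaScript:

    ```javascript
    // A tiny asm.js-style module: `x|0` marks int32 parameters and results,
    // so an asm.js-aware engine can compile this ahead of time.
    function MiniModule(stdlib, foreign, heap) {
      "use asm";
      function add(x, y) {
        x = x | 0;          // declare: x is an int32
        y = y | 0;          // declare: y is an int32
        return (x + y) | 0; // result is an int32
      }
      return { add: add };
    }

    var mod = MiniModule();
    console.log(mod.add(2, 3)); // 5
    ```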

    Since then many speedups have arrived, lots of them small and specific, but there were also a few large features as well. For example, Firefox has recently gained the ability to optimize some floating-point operations so that they are performed using 32-bit floats instead of 64-bit doubles, which provides substantial speedups in some cases as shown in that link. That optimization work was generic and applied to any JavaScript code that happens to be optimizable in that way. Following that work and the speedups it achieved, there was no reason not to add float32 to the asm.js type system so that asm.js code can benefit from it specifically.

    The work to implement that in both emscripten and SpiderMonkey has recently completed, and here are the performance numbers:

    asm1.5b

    Run times are normalized to clang, so lower is better. The red bars (firefox-f32) represent Firefox running on emscripten-generated code using float32. As the graph shows, Firefox with float32 optimizations can run all those benchmarks at around 1.5x slower than native, or better. That’s a big improvement from earlier this year, when as mentioned before things were closer to 2x slower than native. You can also see the specific improvement thanks to float32 optimizations by comparing to the orange bar (firefox) next to it – in floating-point heavy benchmarks like skinning, linpack and box2d, the speedup is very noticeable.

    Another thing to note about those numbers is that not just one native compiler is shown, but two, both clang and gcc. In a few benchmarks, the difference between clang and gcc is significant, showing that while we often talk about “times slower than native speed”, “native speed” is a somewhat loose term, since there are differences between native compilers.

    In fact, on some benchmarks, like box2d, fasta and copy, asm.js is as close or closer to clang than clang is to gcc. There is even one case where asm.js beats clang by a slight amount, on box2d (gcc also beats clang on that benchmark, by a larger amount, so probably clang’s backend codegen just happens to be a little unlucky there).

    Overall, what this shows is that “native speed” is not a single number, but a range. It looks like asm.js on Firefox is very close to that range – that is, while it’s on average slower than clang and gcc, the amount it is slower by is not far off from how much native compilers differ amongst themselves.

    Note that float32 code generation is off by default in emscripten. This is intentional: while it can both improve performance and ensure proper C++ float semantics, it also increases code size (due to the added Math.fround calls), which can be detrimental in some cases, especially in JavaScript engines not yet supporting Math.fround.
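    To make the effect of those calls concrete, here is a small sketch of what Math.fround does; the addFloats function is a hypothetical stand-in for the kind of code emscripten emits for a C++ float addition:

    ```javascript
    // Math.fround rounds a double to the nearest 32-bit float. If every
    // intermediate result is passed through it, the engine can prove the
    // whole computation fits in float32 and use 32-bit hardware arithmetic.
    console.log(Math.fround(1.5) === 1.5); // true: 1.5 is exactly representable
    console.log(Math.fround(0.1) === 0.1); // false: 0.1 is not

    // Hypothetical shape of emscripten output for `float a = b + c;`
    function addFloats(b, c) {
      b = Math.fround(b);
      c = Math.fround(c);
      return Math.fround(b + c);
    }
    console.log(addFloats(1.5, 2.5)); // 4
    ```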

    There are some ways to work around that issue, such as the outlining option, which reduces maximum function size. We have some other ideas on ways to improve code generation in emscripten as well, so we’ll be experimenting with those for a while, as well as tracking Math.fround support in browsers (so far Firefox and Safari support it). Hopefully in the not so far future we can enable float32 optimizations by default in emscripten.

    Summary

    In summary, the graph above shows asm.js performance getting yet closer to native speed. While for the reasons just mentioned I don’t recommend that people build with float32 optimizations quite yet – hopefully soon though! – it’s an exciting increase in performance. And even the current performance numbers – 1.5x slower than native, or better – are not the limit of what can be achieved, as there are still big improvements either under way or in planning, both in emscripten and in JavaScript engines.

  2. Ember Inspector on a Firefox near you

    … or Cross-Browser Add-ons for Fun or Profit

    Browser add-ons are clearly an important web browser feature, at least on the desktop platform, and for a long time Firefox was add-on authors’ preferred target. When Google launched Chrome, this trend in the desktop browser domain was already clear, so Chrome provides an add-on API as well.

    Most of the Web DevTools we are used to are now integrated directly into our browser, but not so long ago they were add-ons, and it’s not strange that new web developer tools are born as add-ons.

    Web DevTools (integrated or add-ons) can motivate web developers to change their browser, and web developers can in turn push web users to change theirs. So, long story short, it would be interesting and useful to create cross-browser add-ons, especially web devtools add-ons (e.g. to help preserve web neutrality).

    With this goal in mind, I chose Ember Inspector as the target for my cross-browser devtool add-ons experiment, based on the following reasons:

    • It belongs to an emerging and interesting web devtools family (web framework devtools)
    • It’s a pretty complex / real world Chrome extension
    • It’s mostly written in the same web framework by its own community
    • Even though it is a Chrome extension, it’s a webapp built from the app sources using grunt
    • Its JavaScript code is organized into modules and Chrome-specific code is mostly isolated in just a couple of those
      Plan & Run Porting Effort

      Looking into the ember-extension git repository, we see that the add-on is built from its sources using grunt:

      Ember Extension: chrome grunt build process

      The extension communicates between the developer tools panel, the page and the main extension code via message passing:

      Ember Extension: High Level View

      Using this knowledge, planning the port to Firefox was surprisingly easy:

      • Create new Firefox add-on specific code (register a devtool panel, control the inspected tab)
      • Polyfill the communication channel between the ember_debug module (that is injected into the inspected tab) and the devtool ember app (that is running in the devtools panel)
      • Polyfill the missing non-standard inspect function, which opens the DOM Inspector on a DOM Element selected by a given Ember View id
      • Minor tweaks (isolate remaining Chrome and Firefox specific code, fix CSS -webkit prefixed rules)

      In my opinion this port was particularly pleasant to plan thanks to two main design choices:

      • Modular JavaScript sources, which help to keep browser-specific code encapsulated in replaceable modules
      • The devtool panel and the code injected into the target tab collaborate by exchanging simple JSON messages, and the protocol (defined by this add-on) is totally browser-agnostic
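      As a toy illustration of that second design choice (all names and message types here are hypothetical, not the actual protocol), the key property is that both sides only ever see JSON-serializable messages, so the transport underneath (chrome.runtime ports on Chrome, Add-on SDK content workers on Firefox) can be swapped without touching the application code:

      ```javascript
      // Toy browser-agnostic channel: the JSON round-trip enforces that no
      // live objects or functions ever cross the boundary.
      function createChannel() {
        var handlers = [];
        return {
          on: function (fn) { handlers.push(fn); },
          send: function (msg) {
            var json = JSON.parse(JSON.stringify(msg));
            handlers.forEach(function (fn) { fn(json); });
          }
        };
      }

      var channel = createChannel();
      var received = [];
      channel.on(function (msg) { received.push(msg.type); });

      // The devtool panel side would send messages like:
      channel.send({ type: 'view:viewTree', tree: [] });
      console.log(received[0]); // 'view:viewTree'
      ```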

      Most of the JavaScript modules which compose this extension were already browser independent, so the first step was to bootstrap a simple Firefox Add-on and register a new devtool panel.

      Creating a new panel in the DevTools is really simple, and there are some useful docs about the topic on the Tools/DevToolsAPI page (work in progress).

      Register / unregister devtool panel

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/main.js

      Devtool panel definition

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L26

      Then, moving to the second step, we adapt the code used to create the message channels between the devtool panel and the injected code running in the target tab, using content scripts and the low-level content worker from the Mozilla Add-on SDK, both of which are well documented in the official guide and API reference:

      EmberInspector - Workers, Content Scripts and Adapters

      DevTool Panel Workers

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js

      Inject ember_debug

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js

      Finally, hook in the browser-specific code needed to activate the DOM Inspector on a given DOM Element:

      Inspect DOM element request handler

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L178

      Evaluate its features and dive into the exchanged messages

      At this point one could wonder: how useful is a tool like this? Do I really need it?

      I must admit that I started and completed this port without being an experienced EmberJS developer. But to check that all the original features worked correctly on Firefox, and to really understand how this browser add-on helps EmberJS developers during the app development and debugging phases (its most important use cases), I started to experiment with EmberJS. I have to say that EmberJS is a very pleasant framework to work with, and Ember Inspector is a really important tool to put into our tool belts.

      I’m pretty sure that every medium or large sized JavaScript framework needs this kind of DevTool; clearly it will never be an integrated one, because it’s framework-specific, so we will be getting used to this new family of DevTool add-ons from now on.

      List Ember View, Model Components and Routes

      The first use case is being able to immediately visualize the Routes, Views/Components, Models and Controllers our EmberJS app instantiates for us, without too much webconsole acrobatics.

      So it’s immediately available (and evident) when we open its panel on an EmberJS app active in the current browser tab:

      Ember Inspector - ViewTree

      Using these tables we can then inspect all the properties (even computed ones) defined by us or inherited from the ember classes in the actual object hierarchy.

      Using an approach very similar to the Mozilla Remote Debugging Protocol used by the integrated DevTools infrastructure (e.g. even when we use the devtools locally, they exchange JSON messages over a pipe), the ember_debug component injected into the target tab sends the info it needs about the instantiated EmberJS objects to the devtool panel component, each identified by an internally generated reference ID (similar to the grips concept from the Mozilla Remote Debugging Protocol).

      Ember Extension - JSON messages

      Logging the exchanged messages, we can learn more about the protocol.

      Receive updates about EmberJS view tree info (EmberDebug -> DevtoolPanel):

      Request inspect object (DevtoolPanel -> EmberDebug):

      Receive updates about the requested Object info (EmberDebug -> DevtoolPanel):
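      A toy sketch of the reference-ID idea behind these messages (field names here are hypothetical, not the actual protocol): ember_debug keeps a map from generated IDs to the live EmberJS objects, and only the IDs ever travel over the channel:

      ```javascript
      // Hypothetical grip-like references: serializable handles for live objects.
      var refs = {};
      var nextId = 0;

      function reference(obj) {
        var id = 'ref-' + (nextId++);
        refs[id] = obj;
        return id; // only this string is sent to the devtool panel
      }

      var id = reference({ name: 'App.IndexController' });

      // Later, the panel asks about the object by ID only:
      var msg = { type: 'objectInspector:inspect', objectId: id };
      console.log(refs[msg.objectId].name); // 'App.IndexController'
      ```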

      Reach every EmberJS object in the hierarchy from the webconsole

      A less evident but really useful feature is “sendToConsole”, which lets us reach, from the webconsole, any object/property that we can inspect in the tables described above.

      When we click the >$E link, which is accessible in the right split panel:

      Ember Inspector - sendToConsole

      The ember devtool panel asks ember_debug to put the selected object/property into a variable named $E, globally accessible in the target tab; then we can switch to the webconsole and interact with it freely:

      Ember Inspector - sendToConsole

      Request send object to console (DevtoolPanel -> EmberDebug):

      Much more

      These are only some of the features already present in Ember Inspector, and more are coming in upcoming versions (e.g. logging and inspecting Ember Promises).

      If you already use EmberJS, or if you are thinking about trying it, I suggest you give Ember Inspector a try (on Firefox or Chrome, whichever you prefer); it will turn inspecting your EmberJS webapp into a fast and easy task.

      Integrate XPI building into the grunt-based build process

      The last challenge on the road to a Firefox add-on fully integrated into the ember-extension build workflow was integrating XPI building, for an add-on based on the Mozilla Add-on SDK, into the grunt build process:

      Chrome crx extensions are simply ZIP files, as are Firefox XPI add-ons, but Firefox add-ons based on the Mozilla Add-on SDK need to be built using the cfx tool from the Add-on SDK package.

      If we want more cross-browser add-ons, we have to help developers build cross-browser extensions using the same approach used by ember-extension: a webapp built using grunt which runs inside a browser add-on (which provides the glue code specific to each supported browser).

      So I decided to move the grunt plugin that I put together to integrate common and custom Add-on SDK tasks (e.g. download a given Add-on SDK release, build an XPI, run cfx with custom parameters) into a separate project (and npm package), because it could help make this task simpler and less annoying.

      Ember Extension: Firefox and Chrome Add-ons grunt build

      Build and run Ember Inspector Firefox Add-on using grunt:

      Following are some interesting fragments from grunt-mozilla-addon-sdk integration into ember-extension (which are briefly documented in the grunt-mozilla-addon-sdk repo README):

      Integrate grunt plugin into npm dependencies: package.json

      Define and use grunt shortcut tasks: Gruntfile.js

      Configure grunt-mozilla-addon-sdk tasks options
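      As a rough sketch only (the exact task and option names are defined by the grunt-mozilla-addon-sdk README and may well differ from what is shown here), such a Gruntfile configuration could look like this:

      ```javascript
      // CAUTION: hypothetical sketch; consult the grunt-mozilla-addon-sdk
      // README for the real task names and options before copying any of this.
      module.exports = function (grunt) {
        grunt.loadNpmTasks('grunt-mozilla-addon-sdk');

        grunt.initConfig({
          'mozilla-addon-sdk': {
            // Which Add-on SDK release to download and use for building
            release: { options: { revision: '1.14' } }
          },
          'mozilla-cfx-xpi': {
            release: {
              options: {
                extension_dir: 'dist_firefox', // dir with package.json, lib/, data/
                dist_dir: 'tmp/xpi'            // where the generated XPI goes
              }
            }
          }
        });

        // Shortcut task: download the SDK (if needed), then build the XPI
        grunt.registerTask('build_xpi', ['mozilla-addon-sdk', 'mozilla-cfx-xpi']);
      };
      ```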

      Conclusion

      Thanks especially to the help of the EmberJS/EmberInspector community and its maintainers, the Ember Inspector Firefox add-on is officially merged and integrated into the automated build process, so now we can use it on both Firefox and Chrome to inspect our EmberJS apps!

      Stable:

      Latest Build

      In this article we’ve briefly dissected an interesting pattern to develop cross-browser devtools add-ons, and introduced a grunt plugin that simplifies integration of Add-on SDK tools into projects built using grunt: https://npmjs.org/package/grunt-mozilla-addon-sdk

      Thanks to the same web-first approach Mozilla is pushing in the Apps domain, creating cross-browser add-ons is definitely simpler than we thought, and we all win :-)

      Happy Cross-Browser Extending,
      Luca

  3. The Side Projects of Mozillians: JSFiddle and Meatspac.es

    At Mozilla, we are happy to get the chance to work with a lot of talented people. Therefore, as an on-going series, we wanted to take the opportunity to highlight some of the exciting projects Mozillians work on in their spare time.

    JSFiddle

    JSFiddle is a tool to write web examples (in HTML, JavaScript and CSS) called ‘fiddles’. They can be saved and shared with others or embedded in a website, which is perfect for blogs, documentation or tutorials. Created by Piotr Zalewa.

    JSFiddle

    Piotr: I wanted a tool that could help me check if my frontend code was working. I was active on the MooTools scene at the time and we needed a tool to support our users who had questions about the framework and specific bugs to solve. The community is the best motivation. There are about 2,000 developers creating and watching fiddles right now! Many big projects are using JSFiddle for docs (MooTools, HighCharts) or bug requests (jQuery).

    I’m always logged in on the #mootools IRC channel and one day we had a small competition to see who could be the first to answer support questions with only one line of JavaScript code. A user asked a non-trivial question which needed to be answered with both HTML and JavaScript. Our usual workflow was to write an HTML file, run it locally in the browser, copy the code to a Pastebin site, then share the link. No one knew of a tool that could do this. By the next evening I had created a prototype, and it was well accepted. The working but ugly version was completed shortly after. Oskar Krawczyk joined as a designer and the project was ready to be shown to the world.

    It started as Django and MySQL on the server side with MooTools as a frontend framework. Since then the only major change was adding Memcache. Currently we run JSFiddle on 12 servers sponsored by DigitalOcean: 2 database servers, 3 application servers, 2 Memcache servers, plus static file and development servers. I would ideally like to have the database structured in a way that would be easier to scale. The database is huge and updating tables takes a lot of time.

    JSFiddle was designed at a time when most JavaScript libraries ran under only one framework. We want to allow users to mix frameworks and add more languages. At the moment you can write in HTML, JavaScript, CoffeeScript, CSS and SCSS but I would like to support more languages. We’ve got a full hat of ideas to be implemented but I think it’s better to provide improvements than promises.

    Meatspac.es

    Meatspac.es is a single public channel chat app that generates animated GIFs of users from their camera once they submit a new message. Created by Jen Fong with GIF library support added by Sole Penadés.

    Meatspac.es

    Jen: I’ve been working on various quirky chat apps that involved some form of embedded media so this was an idea I had about getting users to interact beyond typing by posing for the camera and doing a little movement. I also really like GIFs and the fact that they work everywhere. I had been playing with WebRTC here and there and Sole was working on her RTCamera app when I thought: “Could we combine the two worlds? Chat and GIFs?”.

    For the web server I used Nginx, which proxies to a long-running Node process using Express. The messages and GIFs are stored temporarily in LevelDB with a TTL (time-to-live) that deletes each message, including its GIF stored as a Base64 blob, after 10 minutes. On the client side, it uses jQuery and some GIF library files, and updates via WebSockets with an AJAX fallback.
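    As a toy illustration of the TTL idea only (the real app uses LevelDB, not an in-memory map; all names here are made up), each stored message is scheduled for deletion once its time-to-live elapses:

    ```javascript
    // Toy in-memory TTL store: each message self-destructs after its TTL.
    var TTL_MS = 10 * 60 * 1000; // 10 minutes, as in the app
    var store = new Map();

    function put(key, message, ttlMs) {
      store.set(key, message);
      setTimeout(function () { store.delete(key); }, ttlMs || TTL_MS);
    }

    put('msg:1', { chat: 'hello', gif: '<base64 blob>' }, 50); // short TTL for demo
    console.log(store.has('msg:1')); // true: still within its TTL
    setTimeout(function () {
      console.log(store.has('msg:1')); // false: expired and deleted
    }, 100);
    ```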

    The biggest challenge of the project was surprisingly not code related! It was largely keeping up with all the craziness when a flood of people started using the chat, tweeting at me and contacting me. I first mentioned it publicly at ‘RealTimeConf’ in Portland a few weeks prior, then started tweeting about it. After that a bunch of people checked it out, and someone posted it on Hacker News, where even more people came (around 8,000 people on the heaviest day). It was mentioned on Twitter and various sources for a few days after.

    People can be really creative during their GIF creation. It was also interesting to watch people give each other humorous ‘-bro’ nicknames; both women and men. They would always ask others what their name should be rather than giving themselves a name.

    I am now working on a similar app for one-to-many GIF chatting on Firefox OS, called chatspaces. Anyone who is interested in contributing can watch the repository and check the README for what to contribute.

  4. Handling click-to-activate plugins using JavaScript

    From Firefox 26 onwards — and in the case of insecure Flash/Java in older Firefox versions — most plugins will not be automatically activated. We therefore can no longer expect plugins to start immediately after they have been inserted into the page. This article covers JavaScript techniques we can employ to handle plugins, making it less likely that affected sites will break.

    Using a script to determine if a plugin is installed

    To detect if a plugin is actually installed, we can query navigator.mimeTypes for the plugin MIME type we intend to use, to differentiate between plugins that are not installed and those that are click-to-activate. For example:

    function isJavaAvailable() {
        return 'application/x-java-applet' in navigator.mimeTypes;
    }

    Note: Do not iterate through navigator.mimeTypes or navigator.plugins, as enumeration may well be removed as a privacy measure in a future version of Firefox.

    Using a script callback to determine when a plugin is activated

    The next thing to be careful of is not to script plugin instances immediately after they are created on the page, as the plugin may not be properly loaded yet. Instead, the plugin should make a call into JavaScript after it is created, using NPRuntime scripting:

    function pluginCreated() {
        document.getElementById('myPlugin').callPluginMethod();
    }
    <object type="application/x-my-plugin" data="somedata.mytype" id="myPlugin">
      <param name="callback" value="pluginCreated()">
    </object>

    Note that the “callback” parameter (or something equivalent) must be implemented by your plugin. This can be done in Flash using the flash.external.ExternalInterface API, or in Java using the netscape.javascript package.

    Using properties on the plugin to determine when it activated

    When using a plugin that doesn’t allow us to specify callbacks and that we can’t modify, an alternative technique is to test for properties that the plugin should have, using code constructs like so:

    <p id="myNotification">Waiting for the plugin to activate!</p>
    <object id="myPlugin" type="application/x-my-plugin"></object>
    function checkPlugin() {
        if (document.getElementById('myPlugin').myProperty !== undefined) {
            document.getElementById('myNotification').style.display = 'none';
            document.getElementById('myPlugin').callPluginMethod();
        } else {
            console.log("Plugin not activated yet.");
            setTimeout(checkPlugin, 500);
        }
    }
    window.onload = checkPlugin;

    Making plugins visible on the page

    When a site wants the user to enable a plugin, the primary indicator is that the plugin is visible on the page, for example:

    Screenshot of the silverlight plugin activation on the Netflix website.

    If a page creates a plugin that is very small or completely hidden, the only visual indication to the user is the small icon in the Firefox location bar. Even if the plugin element will eventually be hidden, pages should create the plugin element visible on the page, and then resize or hide it only after the user has activated the plugin. This can be done in a similar fashion to the callback technique we showed above:

    function pluginCreated() {
      // We don't need to see the plugin, so hide it by resizing
      var plugin = document.getElementById('myPlugin');
      plugin.height = 0;
      plugin.width = 0;
      plugin.callPluginMethod();
    }
    <!-- Give the plugin an initial size so it is visible -->
    <object type="application/x-my-plugin" data="somedata.mytype" id="myPlugin" width="300" height="300">
      <param name="callback" value="pluginCreated()">
    </object>

    Note: For more basic information on how plugins operate in Firefox, read Why do I have to click to activate plugins? on support.mozilla.org.

  5. Using JSFiddle to Prototype Firefox OS Apps

    Dancing to the Tune of the Fiddle

    JSFiddle is a fantastic prototyping and code review tool. It’s great for getting out a quick test case or code concept without having to spool up your full tool chain and editor. Further, it’s a great place to paste ill-behaved code so that others can review it and ideally help you get at the root of your problem.

    Now you’re able to not only prototype snippets of code, but Firefox OS apps as well. We’re very excited about this because for a while now we’ve been trying to make sure developers understand that creating a Firefox OS app is just like creating a web app. By tinkering with JSFiddle live in your browser, we think you’ll see just how easy it is and the parallels will be more evident.

    Fiddling a Firefox OS App: The Summary

    Here are the steps that you need to go through to tinker with Firefox OS apps using JSFiddle:

    1. Write your code as you might normally when making a JSFiddle
    2. Append /manifest.webapp to your Fiddle URL, then paste this link into the Firefox OS simulator to install the app
    3. Alternatively, append /fxos.html to your Fiddle URL to get an install page like a typical Firefox OS hosted application

    I’ve created a demo JSFiddle here that we will go over in detail in the next section.

    Fiddling a Firefox OS App: In Detail

    Write Some Code

    Let’s start with a basic “Hello World!”, a familiar minimal implementation. Implement the following code in your Fiddle:

    HTML:

    <h1>Hello world!</h1>

    CSS

    h1 {
        color: #f00;
    }

    JavaScript

    alert(document.getElementsByTagName('h1')[0].innerHTML);

    Your Fiddle should resemble the following:

    Hello world Firefox OS JSFiddle

    Then, append /manifest.webapp to the end of your Fiddle URL. Using my demo Fiddle as an example, we end up with http://jsfiddle.net/afabbro/vrVAP/manifest.webapp

    Copy this URL to your clipboard. Depending on your browser behavior, it may or may not copy with ‘http://’ intact. Please note that the simulator will not accept any URLs where the protocol is not specified explicitly. So, if it’s not there – add it. The simulator will highlight this input box with a red border when the URL is invalid.

    If you try and access your manifest.webapp from your browser navigation bar, you should end up downloading a copy of the auto-generated manifest that you can peruse. For example, here is the manifest for my test app:

    {
      "version": "0",
      "name": "Hello World Example",
      "description": "jsFiddle example",
      "launch_path": "/afabbro/vrVAP/app.html",
      "icons": {
        "16": "/favicon.png",
        "128": "/img/jsf-circle.png"
      },
      "developer": {
        "name": "afabbro"
      },
      "installs_allowed_from": ["*"],
      "appcache_path": "http://fiddle.jshell.net/afabbro/vrVAP/cache.manifest",
      "default_locale": "en"
    }

    If you haven’t written a manifest for a Firefox OS app before, viewing this auto-generated one will give you an idea of what bits of information you need to provide for your app when you create your own from scratch later.

    Install the App in the Simulator

    Paste the URL that you copied into the field as shown below. As mentioned previously, the field will highlight red if there are any problems with your URL.

    How your URL should look

    After adding, the simulator should boot your app immediately.

    Alert with confirmation button

    You can see that after we dismiss the alert() that we are at a view (a basic HTML page in this case) with a single red h1 tag as we would expect.

    Our Hello World Page in the Simulator

    Install the App From a Firefox OS Device

    In the browser on your Firefox OS device or in the browser provided in the simulator, visit the URL of your Fiddle and append /fxos.html. Using the demo URL as an example again, we obtain: http://jsfiddle.net/afabbro/vrVAP/fxos.html

    Click install, and you should find the app on your home screen.

    Caveats

    This is still very much a new use of the JSFiddle tool, and as such there are still bugs and features we’re hoping to work out for the long term. For instance, at time of writing this article, the following caveats are true:

    1. You can only have one JSFiddle’d app installed in the simulator at a time
    2. There is no offline support

    Thanks

    This JSFiddle hack comes to us courtesy of Piotr Zalewa, who also happens to be working on making PhoneGap build for Firefox OS. Let us know what you think in the comments, and post a link to your Fiddle’s manifest if you make something interesting that you want to show off.

  6. So You Wanna Build a Crowdfunding Site?

    The tools to get funded by the crowd should belong to the crowd.

    That's why I want to show you how to roll your own crowdfunding site, in less than 300 lines of code. Everything in this tutorial is open source, and we'll only use other open-source technologies, such as Node.js, MongoDB, and Balanced Payments.

    Here's the Live Demo.
    All source code and tutorial text is Unlicensed.

    0. Quick Start

    If you just want the final crowdfunding site, clone the crowdfunding-tuts repository and go to the /demo folder.

    All you need to do is set your configuration variables, and you’re ready to go! For everyone who wants the nitty gritty details, carry on.

    1. Setting up a basic Node.js app with Express

    If you haven’t already done so, you’ll need to install Node.js. (duh)

    Create a new folder for your app. We’ll be using the Express.js framework to make things a lot more pleasant. To install the Express node module, run this on the command line inside your app’s folder:

    npm install express

    Next, create a file called app.js, which will be your main server logic. The following code will initialize a simple Express app,
    which just serves a basic homepage and funding page for your crowdfunding site.

    // Configuration
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars
     
    // Initialize an Express app
    var express = require('express');
    var app = express();
    app.use("/static", express.static(__dirname + '/static')); // Serve static files
    app.use(express.bodyParser()); // Can parse POST requests
    app.listen(1337); // The best port
    console.log("App running on http://localhost:1337");
     
    // Serve homepage
    app.get("/",function(request,response){
     
        // TODO: Actually get fundraising total
        response.send(
            "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
            "<h1>Your Crowdfunding Campaign</h1>"+
            "<h2>raised ??? out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
            "<a href='/fund'>Fund This</a>"
        );
     
    });
     
    // Serve funding page
    app.get("/fund",function(request,response){
        response.sendfile("fund.html");
    });

    Create another file named fund.html. This will be your funding page.

    <link rel='stylesheet' type='text/css' href='/static/fancy.css'>
    <h1>Donation Page:</h1>

    Optionally, you may also include a stylesheet at /static/fancy.css,
    so that your site doesn’t look Hella Nasty for the rest of this tutorial.

    @import url(https://fonts.googleapis.com/css?family=Raleway:200);
    body {
        margin: 100px;
        font-family: Raleway; /* Sexy font */
        font-weight: 200;
    }

    Finally, run node app on the command line to start your server!

    Check out your crowdfunding site so far at http://localhost:1337.

    Crowdfunding Homepage 1

    The homepage will display the Campaign Goal you set in the Configuration section of app.js. The donations page isn’t functional yet, so in the following chapters, I’ll show you how to accept and aggregate credit card payments from your wonderful backers.

    2. Getting started with Balanced Payments

    Balanced Payments isn’t just another payments processor. They’ve open sourced their whole site, their chat logs are publicly available, and they even discuss their roadmap in the open. These people get openness.

    Best of all, you don’t even need to sign up to get started with Balanced!

    Just go to this link, and they’ll generate a brand-new Test Marketplace for you,
    that you can claim with an account afterwards. Remember to keep this tab open, or save the URL, so you can come back to your Test Marketplace later.

    Balanced Test Marketplace

    Click the Settings tab in the sidebar, and note your Marketplace URI and API Key Secret.

    Balanced Settings

    Copy these variables to the Configuration section of app.js like this:

    // Configuration
    var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
    var BALANCED_API_KEY = "YourAPIKey";
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars

    Now, let’s switch back to fund.html to create our actual payment page.

    First, we’ll include and initialize Balanced.js. This JavaScript library will securely tokenize the user’s credit card info, so your server never has to handle the info directly. This means you will be free from PCI regulations. Append the following code to fund.html, replacing BALANCED_MARKETPLACE_URI with your actual Marketplace URI:

    <!-- Remember to replace BALANCED_MARKETPLACE_URI with your actual Marketplace URI! -->
    <script src="https://js.balancedpayments.com/v1/balanced.js"></script>
    <script>
        var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
        balanced.init(BALANCED_MARKETPLACE_URI);
    </script>

    Next, create the form itself, asking for the user’s Name, the Amount they want to donate, and other credit card info. We will also add a hidden input, for the credit card token that Balanced.js will give us. The form below comes with default values for a test Visa credit card. Append this to fund.html:

    <form id="payment_form" action="/pay/balanced" method="POST">
     
        Name: <input name="name" value="Pinkie Pie"/> <br>
        Amount: <input name="amount" value="12.34"/> <br>
        Card Number: <input name="card_number" value="4111 1111 1111 1111"/> <br>
        Expiration Month: <input name="expiration_month" value="4"/> <br>
        Expiration Year: <input name="expiration_year" value="2050"/> <br>
        Security Code: <input name="security_code" value="123"/> <br>
     
        <!-- Hidden inputs -->
        <input type="hidden" name="card_uri"/>
     
    </form>
    <button onclick="charge();">
        Pay with Credit Card
    </button>

    Notice the Pay button does not submit the form directly, but calls a charge() function instead, which we are going to implement next. The charge() function will get the credit card token from Balanced.js, add it as a hidden input, and submit the form. Append this to fund.html:

    <script>
     
    // Get card data from form.
    function getCardData(){
        // Actual form data
        var form = document.getElementById("payment_form");
        return {
            "name": form.name.value,
            "card_number": form.card_number.value,
            "expiration_month": form.expiration_month.value,
            "expiration_year": form.expiration_year.value,
            "security_code": form.security_code.value
        };
    }
     
    // Charge credit card
    function charge(){
     
        // Securely tokenize card data using Balanced
        var cardData = getCardData();
        balanced.card.create(cardData, function(response) {
     
            // Handle Errors (Anything that's not Success Code 201)
            if(response.status!=201){
                alert(response.error.description);
                return;
            }
     
            // Submit form with Card URI
            var form = document.getElementById("payment_form");
            form.card_uri.value = response.data.uri;
            form.submit();
     
        });
     
    };
     
    </script>

    This form will send a POST request to /pay/balanced, which we will handle in app.js. For now, we just want to display the card token URI. Paste the following code at the end of app.js:

    // Pay via Balanced
    app.post("/pay/balanced",function(request,response){
     
        // Payment Data
        var card_uri = request.body.card_uri;
        var amount = request.body.amount;
        var name = request.body.name;
     
        // Placeholder
        response.send("Your card URI is: "+request.body.card_uri);
     
    });

    Restart your app (Ctrl-C to exit, then node app to start again), and go back to http://localhost:1337.

    Your payment form should now look like this:

    Funding Form 1

    The default values for the form will already work, so just go ahead and click Pay With Credit Card. (Make sure you’ve replaced BALANCED_MARKETPLACE_URI in fund.html with your actual Test Marketplace’s URI!) Your server will happily respond with the generated Card URI Token.

    Funding Form 2

    Next up, we will use this token to actually charge the given credit card!

    3. Charging cards through Balanced Payments

    Before we charge right into this, (haha) let’s install two more Node.js modules for convenience.

    Run the following in the command line:

    # A library for simplified HTTP requests.
    npm install request
     
    # A Promises library, to pleasantly handle asynchronous calls and avoid Callback Hell.
    npm install q

    Because we’ll be making multiple calls to Balanced, let’s also create a helper method. The following function returns a Promise that the Balanced API has responded to whatever HTTP Request we just sent it. Append this code to app.js:

    // Calling the Balanced REST API
    var Q = require('q');
    var httpRequest = require('request');
    function _callBalanced(url,params){
     
        // Promise an HTTP POST Request
        var deferred = Q.defer();
        httpRequest.post({
     
            url: "https://api.balancedpayments.com"+BALANCED_MARKETPLACE_URI+url,
            auth: {
                user: BALANCED_API_KEY,
                pass: "",
                sendImmediately: true
            },
            json: params
     
        }, function(error,response,body){
     
            // Handle all Bad Requests (Error 4XX) or Internal Server Errors (Error 5XX)
            if(body.status_code>=400){
                deferred.reject(body.description);
                return;
            }
     
            // Successful Requests
            deferred.resolve(body);
     
        });
        return deferred.promise;
     
    }

    Now, instead of just showing us the Card Token URI when we submit the donation form, we want to:

    1. Create an account with the Card URI
    2. Charge said account for the given amount (note: you’ll have to convert to cents for the Balanced API)
    3. Record the transaction in the database (note: we’re skipping this for now, and covering it in the next chapter)
    4. Render a personalized message from the transaction
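
    The dollars-to-cents conversion in step 2 is easy to get wrong with floating point, which is why the code below uses Math.round. As a standalone sketch (the helper name here is hypothetical, not part of the app’s code):

    ```javascript
    // Convert a dollar amount (string or number) to an integer number of cents.
    // Math.round guards against floating-point artifacts,
    // e.g. 12.34 * 100 evaluates to 1233.9999999999998 in JavaScript.
    function dollarsToCents(amount) {
        return Math.round(parseFloat(amount) * 100);
    }

    console.log(dollarsToCents("12.34"));   // 1234
    console.log(dollarsToCents(0.1 + 0.2)); // 30
    ```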

    Replace the app.post("/pay/balanced", ... ); callback from the previous chapter with this:

    // Pay via Balanced
    app.post("/pay/balanced",function(request,response){
     
        // Payment Data
        var card_uri = request.body.card_uri;
        var amount = request.body.amount;
        var name = request.body.name;
     
        // TODO: Charge card using Balanced API
        /*response.send("Your card URI is: "+request.body.card_uri);*/
     
        Q.fcall(function(){
     
            // Create an account with the Card URI
            return _callBalanced("/accounts",{
                card_uri: card_uri
            });
     
        }).then(function(account){
     
            // Charge said account for the given amount
            return _callBalanced("/debits",{
                account_uri: account.uri,
                amount: Math.round(amount*100) // Convert from dollars to cents, as integer
            });
     
        }).then(function(transaction){
     
            // Donation data
            var donation = {
                name: name,
                amount: transaction.amount/100, // Convert back from cents to dollars.
                transaction: transaction
            };
     
            // TODO: Actually record the transaction in the database
            return Q.fcall(function(){
                return donation;
            });
     
        }).then(function(donation){
     
            // Personalized Thank You Page
            response.send(
                "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
                "<h1>Thank you, "+donation.name+"!</h1> <br>"+
                "<h2>You donated $"+donation.amount.toFixed(2)+".</h2> <br>"+
                "<a href='/'>Return to Campaign Page</a> <br>"+
                "<br>"+
                "Here's your full Donation Info: <br>"+
                "<pre>"+JSON.stringify(donation,null,4)+"</pre>"
            );
     
        },function(err){
            response.send("Error: "+err);
        });
     
    });

    Now restart your app, and pay through the Donation Page once again. (Note: To cover processing fees, you have to pay more than $0.50 USD) This time, you’ll get a full Payment Complete page, with personalized information!

    Transaction 1

    Furthermore, if you check the transactions tab in your Test Marketplace dashboard, you should find that money has now been added to your balance.

    Transaction 2

    We’re getting close! Next, let’s record donations in a MongoDB database.

    4. Recording donations with MongoDB

    MongoDB is a popular open-source NoSQL database. NoSQL is especially handy for rapid prototyping, because of its dynamic schemas. In other words, you can just make stuff up on the fly.

    This will be useful if, in the future, you want to record extra details about each donation, such as the donator’s email address, reward levels, favorite color, etc.
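
    To illustrate (a standalone sketch, not part of the app’s code), two donation documents in the same collection can carry completely different fields, and code that only reads the amount field still works:

    ```javascript
    // With a dynamic schema, documents in one collection
    // don't have to share a fixed set of fields.
    var donations = [
        { name: "Pinkie Pie", amount: 12.5 },
        { name: "Backer Two", amount: 20, email: "backer@example.com", favoriteColor: "blue" }
    ];

    // Code that only cares about amounts simply ignores the extra fields.
    var total = donations.reduce(function (sum, d) { return sum + d.amount; }, 0);
    console.log(total); // 32.5
    ```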

    Start up a MongoDB database, and get its URI. You can use a remote database with a service such as MongoHQ, but for this tutorial, let’s run MongoDB locally (instructions for installing and running MongoDB on your computer).

    Once you’ve done that, add the MongoDB URI to your Configuration section at the top of app.js.

    // Configuration
    var MONGO_URI = "mongodb://localhost:27017/test";
    var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
    var BALANCED_API_KEY = "YourAPIKey";
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars

    Now, let’s install the native MongoDB driver for Node.js:

    npm install mongodb

    Add the following code to the end of app.js. This will return a Promise that we’ve recorded a donation in MongoDB.

    // Recording a Donation
    var mongo = require('mongodb').MongoClient;
    function _recordDonation(donation){
     
        // Promise saving to database
        var deferred = Q.defer();
        mongo.connect(MONGO_URI,function(err,db){
            if(err){ return deferred.reject(err); }
     
            // Insert donation
            db.collection('donations').insert(donation,function(err){
                if(err){ return deferred.reject(err); }
     
                // Promise the donation you just saved
                deferred.resolve(donation);
     
                // Close database
                db.close();
     
            });
        });
        return deferred.promise;
     
    }

    Previously, we skipped over actually recording a donation to a database. Go back, and replace that section of code with this:

    // TODO: Actually log the donation with MongoDB
    /*return Q.fcall(function(){
        return donation;
    });*/
     
    // Record donation to database
    return _recordDonation(donation);

    Restart your app, and make another donation. If you run db.donations.find() on your MongoDB instance, you’ll find the donation you just logged!

    Transaction 3

    Just one step left…

    Finally, we will use these recorded donations to calculate how much money we’ve raised.

    5. Completing the Donation

    Whether it’s showing progress or showing off, you’ll want to tell potential backers how much your campaign’s already raised.

    To get the total amount donated, simply query for all donation amounts from MongoDB, and add them up. Here’s how to do that with MongoDB, wrapped in an asynchronous Promise. Append this code to app.js:

    // Get total donation funds
    function _getTotalFunds(){
     
        // Promise the result from database
        var deferred = Q.defer();
        mongo.connect(MONGO_URI,function(err,db){
            if(err){ return deferred.reject(err); }
     
            // Get amounts of all donations
            db.collection('donations')
            .find( {}, {amount:1} ) // Select all, only return "amount" field
            .toArray(function(err,donations){
                if(err){ return deferred.reject(err); }
     
                // Sum up total amount, and resolve promise.
                var total = donations.reduce(function(previousValue,currentValue){
                    return previousValue + currentValue.amount;
                },0);
                deferred.resolve(total);
     
                // Close database
                db.close();
     
            });
        });
        return deferred.promise;
     
    }

    Now, let’s go back to where we were serving a basic homepage. Let’s change that, to actually calculate your total funds, and show the world how far along your campaign has gotten.

    // Serve homepage
    app.get("/",function(request,response){
     
        // TODO: Actually get fundraising total
        /*response.send(
            "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
            "<h1>Your Crowdfunding Campaign</h1>"+
            "<h2>raised ??? out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
            "<a href='/fund'>Fund This</a>"
        );*/
     
        Q.fcall(_getTotalFunds).then(function(total){
            response.send(
                "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
                "<h1>Your Crowdfunding Campaign</h1>"+
                "<h2>raised $"+total.toFixed(2)+" out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
                "<a href='/fund'>Fund This</a>"
            );
        });
     
    });

    Restart the app, and look at your final homepage.

    Crowdfunding Homepage 2

    It’s… beautiful.

    You’ll see that your total already includes the donations recorded from the previous chapter. Make another payment through the Donations Page, and watch your funding total go up.

    Congratulations, you just made your very own crowdfunding site!

    - - -

    Discuss this on Hacker News

  7. Content Security Policy 1.0 lands in Firefox Aurora

    The information in this article is based on work together with Ian Melven, Kailas Patil and Tanvi Vyas.

    We have just landed support for the Content Security Policy (CSP) 1.0 specification in Firefox Aurora (Firefox 23), available as of tomorrow (May 30th). CSP is a security mechanism that aims to protect a website against content injection attacks by providing a whitelist of known-good domain names to accept JavaScript (and other content) from. CSP does this by sending a Content-Security-Policy header with the document it protects (yes, we lost the X prefix with the 1.0 version of the spec).

    To effectively protect against XSS, a few JavaScript features have to be
    disabled:

    • All inline JavaScript is disallowed. This means that all JavaScript code must be placed in a separate file that is linked via <script src=... >
    • All calls to functions which allow JavaScript code to be executed from strings (e.g., eval) are disabled
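
    For example, a policy that accepts scripts only from the page’s own origin and from a single trusted CDN (the host name here is a placeholder) would be delivered as:

    ```
    Content-Security-Policy: script-src 'self' https://cdn.example.com
    ```

    Any inline script block on such a page would then be blocked, and its code would need to move into a file served from one of the whitelisted origins.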

    CSP now more intuitive and consistent

    While Firefox has had support for CSP since its invention here at Mozilla, things have been changing a lot. The streamlined development of a specification within the W3C has made the concept more intuitive and consistent. Most directives in a CSP header are now of a unified form which explicitly specifies the type of content you want to restrict:

    • img-src
    • object-src
    • script-src
    • style-src and so on.

    Oh and if you feel like you must allow less secure JavaScript coding styles, you can add the values unsafe-inline or unsafe-eval to your list of script sources. (This used to be inline-script and eval-script before).

    Start protecting your website by implementing CSP now!

    But wait – isn’t that a bit tedious… Writing a complex policy and making sure that you remembered all the resources that your website requires? Don’t fret! Here comes UserCSP again!

    Generate your Content Security Policies with UserCSP!

    During the last few months, Kailas Patil, a student in our Security Mentorship Program has continued his GSoC work from last year to update UserCSP.

    UserCSP is a Firefox add-on that helps web developers and security-minded users use CSP. Web developers can create a Content Security Policy (CSP) for their site by using UserCSP’s infer CSP feature. This feature can list required resource URLs and turn them into a policy ready to plug into a CSP header.

    In addition, UserCSP is the first step to expose a policy enforcement mechanism directly to web users. Furthermore, users can enforce a stricter policy than a page supplies through the add-on or apply a policy to certain websites that don’t currently support CSP.

    While earlier versions of UserCSP were more aligned to content security policies as originally invented at Mozilla, this version is updated to be in compliance with the CSP 1.0 specification. This means that policies derived with this add-on may work in all browsers as soon as they support the specification. Hooray!

    As this evolves and ships, our MDN documentation on Content Security Policy (CSP) will keep on evolving, and we also plan to write more about this in the Mozilla Security Blog in the next few weeks, so stay tuned!

  8. Compiling to JavaScript, and Debugging with Source Maps

    Update 2013/05/29: I have updated the article to reflect recent changes in the source map specification where the //@ syntax for linking a source map to a script has been deprecated in favor of //# due to problems with Internet Explorer.

    This is a tutorial on how to write a compiler which generates JavaScript as its target language, and maintains line and column meta-data in source maps for debugging. Storing line and column coordinates in a source map allows the end-user of the compiler to debug the source code that they wrote, rather than the ugly, generated JavaScript they are not familiar with.

    In this tutorial, we will be compiling a small Reverse Polish Notation, or RPN, language to JavaScript. The language is super simple, and is nothing more than simple arithmetic with variable storage and output capabilities. We are keeping the language simple so that we can focus on integrating source maps with the compiler, rather than language implementation details.

    Availability

    Initial support for source maps in the debugger is available in Firefox 23 (Aurora at time of writing) with more improvements coming in Firefox 24 (Nightly at time of writing). Chrome DevTools also have support for source maps.

    Overview of the Source Language

    RPN uses postfix notation, meaning that the operator follows its two operands. One of the benefits of RPN is that as long as we limit ourselves to binary operators, we do not need any parentheses, and do not need to worry about operator precedence.

    Here is an example program in our source language:

    a 5 =;
    b 3 =;
    c a b + 4 * =;

    This is an equivalent program written in a language which uses infix notation for its arithmetic operators:

    a = 5;
    b = 3;
    c = (a + b) * 4;

    Our language will support addition, subtraction, multiplication, division, assignment, and printing. The print operator’s first operand is the value to print, the second operand is how many times to print the value and must be greater than or equal to one:

    5 1 print;
    # Output:
    # 5
     
    3 4 print;
    # Output:
    # 3
    # 3
    # 3
    # 3
     
    4 print;
    # Syntax error
     
    n -1 =;
    4 n print;
    # Runtime error

    Lastly, division by zero should throw an error:

    5 0 /;
    # Runtime error

    Getting Setup

    We will be writing our compiler on Node.js, using Jison to generate the parser for our language from a grammar, and using the source-map library to help generate source maps.

    The first step is to download and install Node.js if you don’t already have it on your system.

    After you have installed Node.js, use its package manager npm to create a new project for the compiler:

    $ mkdir rpn
    $ cd rpn/
    $ npm init .

    After the last command, npm will prompt you with a bunch of questions. Enter your name and email, answer ./lib/rpn.js for the main module/entry point, and just let npm use the defaults that it supplies for the rest of the questions.

    Once you have finished answering the prompts, create the directory layout for the project:

    $ mkdir lib
    $ touch lib/rpn.js
    $ mkdir -p lib/rpn

    The public API for the compiler will reside within lib/rpn.js, while the submodules we use to implement various things such as the lexer and abstract syntax tree will live in lib/rpn/*.js.

    Next, open up the package.json file and add jison and source-map to the project’s dependencies:

    ...
    "dependencies": {
      "jison": ">=0.4.4",
      "source-map": ">=0.1.22"
    },
    ...

    Now we will install a link to our package in Node.js’s globally installed packages directory. This allows us to import our package from the Node.js shell:

    $ npm link .

    Make sure that everything works by opening the Node.js shell and importing our package:

    $ node
    > require("rpn")
    {}

    Writing the Lexer

    A lexer (also known as a scanner or tokenizer) breaks the raw input source code into a stream of semantic tokens. For example, in our case, we would want to break the raw input string "5 3 +;" into something like ["5", "3", "+", ";"].
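
    Stripped of Jison, the same idea can be sketched by hand (a simplified, hypothetical tokenizer for illustration only, not the one we will actually use):

    ```javascript
    // A naive hand-rolled tokenizer: repeatedly match token patterns
    // at the front of the input, skipping whitespace and comments.
    function tokenize(source) {
        var patterns = [
            /^\s+/,                    // whitespace (skipped)
            /^#.*\n/,                  // comments (skipped)
            /^;/,                      // statement terminator
            /^-?[0-9]+(\.[0-9]+)?/,    // numbers
            /^print\b/,                // print operator
            /^[a-zA-Z][a-zA-Z0-9_]*/,  // variables
            /^[=+\-*\/]/               // arithmetic operators
        ];
        var tokens = [];
        while (source.length > 0) {
            var matched = false;
            for (var i = 0; i < patterns.length; i++) {
                var m = source.match(patterns[i]);
                if (m) {
                    if (i > 1) tokens.push(m[0]); // indices 0-1 are skipped
                    source = source.slice(m[0].length);
                    matched = true;
                    break;
                }
            }
            if (!matched) throw new Error("Unexpected character: " + source[0]);
        }
        return tokens;
    }

    console.log(tokenize("5 3 +;")); // [ '5', '3', '+', ';' ]
    ```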

    Because we are using Jison, rather than writing the lexer and parser by hand, our job is much easier. All that is required is to supply a list of rules that describe the types of tokens we are expecting. The left hand side of each rule is a regular expression to match an individual token; the right hand side is the snippet of code to execute when an instance of the corresponding token type is found. These tokens will be passed on to the parser in the next phase of the compiler.

    Create the rules for lexical analysis in lib/rpn/lex.js:

    exports.lex = {
      rules: [
        ["\\s+",                   "/* Skip whitespace! */"],
        ["#.*\\n",                 "/* Skip comments! */"],
        [";",                      "return 'SEMICOLON';"],
        ["\\-?[0-9]+(\\.[0-9]+)?", "return 'NUMBER';"],
        ["print",                  "return 'PRINT';"],
        ["[a-zA-Z][a-zA-Z0-9_]*",  "return 'VARIABLE';"],
        ["=",                      "return '=';"],
        ["\\+",                    "return '+';"],
        ["\\-",                    "return '-';"],
        ["\\*",                    "return '*';"],
        ["\\/",                    "return '/';"],
        ["$",                      "return 'EOF';"]
      ]
    };

    Writing the Parser

    The parser takes the tokens from the lexer one at a time and confirms that the input is a valid program in our source language.

    Once again, the task of writing the parser is much easier than it would otherwise be thanks to Jison. Rather than writing the parser ourselves, Jison will programmatically create one for us if we provide a grammar for the language.

    If all we cared about was whether the input was a valid program, we would stop here. However, we are also going to compile the input to JavaScript, and to do that we need to create an abstract syntax tree. We build the AST in the code snippets next to each rule.

    A typical grammar contains productions with the form:

    LeftHandSide → RightHandSide1
                 | RightHandSide2
                 ...

    However, in Jison we are a) writing in JavaScript, and b) also providing code to execute for each rule so that we can create the AST. Therefore, we use the following format:

    LeftHandSide: [
      [RightHandSide1, CodeToExecute1],
      [RightHandSide2, CodeToExecute2],
      ...
    ]

    Inside the code snippets, there are a handful of magic variables we have access to:

    • $$: The value of the left hand side of the production.
    • $1/$2/$3/etc: The value of the nth form in the right hand side of the production.
    • @1/@2/@3/etc: An object containing the line and column coordinates where the nth form in the right hand side of the production was parsed.
    • yytext: The full text of the currently matched rule.

    Using this information, we can create the grammar in lib/rpn/bnf.js:

    exports.bnf = {
      start: [
        ["input EOF", "return $$;"]
      ],
      input: [
        ["",           "$$ = [];"],
        ["line input", "$$ = [$1].concat($2);"]
      ],
      line: [
        ["exp SEMICOLON", "$$ = $1;"]
      ],
      exp: [
        ["NUMBER",           "$$ = new yy.Number(@1.first_line, @1.first_column, yytext);"],
        ["VARIABLE",         "$$ = new yy.Variable(@1.first_line, @1.first_column, yytext);"],
        ["exp exp operator", "$$ = new yy.Expression(@3.first_line, @3.first_column, $1, $2, $3);"]
      ],
      operator: [
        ["PRINT", "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"],
        ["=",     "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"],
        ["+",     "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"],
        ["-",     "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"],
        ["*",     "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"],
        ["/",     "$$ = new yy.Operator(@1.first_line, @1.first_column, yytext);"]
      ]
    };

    Implementing the Abstract Syntax Tree

    Create the definitions for the abstract syntax tree nodes in lib/rpn/ast.js.

    Since we will be maintaining line and column information in all of the AST nodes, we can reuse some code by making a base prototype:

    var AstNode = function (line, column) {
      this._line = line;
      this._column = column;
    };

    The definitions for the rest of the AST nodes are pretty straightforward. Link up the prototype chain, assign relevant attributes, and don’t forget to call AstNode’s constructor:

    exports.Number = function (line, column, numberText) {
      AstNode.call(this, line, column);
      this._value = Number(numberText);
    };
    exports.Number.prototype = Object.create(AstNode.prototype);
     
    exports.Variable = function (line, column, variableText) {
      AstNode.call(this, line, column);
      this._name = variableText;
    };
    exports.Variable.prototype = Object.create(AstNode.prototype);
     
    exports.Expression = function (line, column, operand1, operand2, operator) {
      AstNode.call(this, line, column);
      this._left = operand1;
      this._right = operand2;
      this._operator = operator;
    };
    exports.Expression.prototype = Object.create(AstNode.prototype);
     
    exports.Operator = function (line, column, operatorText) {
      AstNode.call(this, line, column);
      this.symbol = operatorText;
    };
    exports.Operator.prototype = Object.create(AstNode.prototype);

    Compilation

    Generated JavaScript

    Before we generate JavaScript, we need a plan. There are a couple ways we can structure the outputted JavaScript.

    One strategy is to translate the RPN expressions to the equivalent human readable JavaScript expression we would create if we had been writing JavaScript all along. For example, if we were to port this RPN example:

    a 8 =;
    b 2 =;
    c a b 1 - / =;

    We might write the following JavaScript:

    var a = 8;
    var b = 2;
    var c = a / (b - 1);

    However, this means that we are completely adopting the nuances of JavaScript’s arithmetic. In an earlier example, we saw that a helpful runtime error was thrown when any number was divided by zero. Most languages throw an error when this occurs, but JavaScript does not; instead, the result is Infinity. Therefore, we can’t completely embrace JavaScript’s arithmetic system, and we must generate some code to check for divide-by-zero errors ourselves. Adding this code gets a little tricky if we want to maintain the strategy of generating human readable code.
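
    You can check this behavior in any JavaScript console:

    ```javascript
    // JavaScript silently produces Infinity (or NaN) instead of
    // throwing on division by zero.
    console.log(8 / 0);              // Infinity
    console.log(8 / 0 === Infinity); // true
    console.log(0 / 0);              // NaN
    ```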

    Another option is treating the JavaScript interpreter as a stack machine of sorts and generating code that pushes and pops values to and from a stack. Furthermore, stack machines are a natural fit for evaluating RPN. In fact, it is such a good fit that RPN “was independently reinvented by F. L. Bauer and E. W. Dijkstra in the early 1960s to reduce computer memory access and utilize the stack to evaluate expressions.”

    Generating JavaScript code for the same example above, but utilizing the JavaScript interpreter as a stack machine, might look something like this:

    push(8);
    push('a');
    env[pop()] = pop();
    push(2);
    push('b');
    env[pop()] = pop();
    push('a');
    push('b');
    push(1);
    temp = pop();
    push(pop() - temp);
    temp = pop();
    if (temp === 0) throw new Error("Divide by zero");
    push(pop() / temp);
    push('c');
    env[pop()] = pop();

    This is the strategy we will follow. The generated code is a bit larger, and we will require a preamble to define push, pop, etc, but compilation becomes much easier. Furthermore, the fact that the generated code isn’t as human readable only highlights the benefits of using source maps!
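
    The preamble only needs to define the small set of helpers the generated code calls. Here is a hypothetical sketch, using the bare names from the snippet above (we will namespace these under __rpn in the actual compiler):

    ```javascript
    // A minimal runtime preamble for the generated code:
    // a value stack, a temp register, and a variable environment.
    var stack = [];
    var temp;
    var env = {};

    function push(val) { stack.push(val); }
    function pop() {
        if (stack.length === 0) throw new Error("stack underflow");
        return stack.pop();
    }

    // With the preamble in place, generated code like the example runs as-is:
    push(8);
    push('a');
    env[pop()] = pop(); // env.a = 8
    push(2);
    push('b');
    env[pop()] = pop(); // env.b = 2
    console.log(env.a / env.b); // 4
    ```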

    Creating Source Maps

    If we weren’t generating source maps along with our generated JavaScript, we could build the generated code via concatenating strings of code:

    code += "push(" + operand1.compile() + " "
      + operator.compile() + " "
      + operand2.compile() + ");\n";

    However, this doesn’t work when we are creating source maps because we need to maintain line and column information. When we concatenate strings of code, we lose that information.

    The source-map library contains SourceNode for exactly this reason. If we add a new method on our base AstNode prototype, we can rewrite our example like this:

    var SourceNode = require("source-map").SourceNode;
    AstNode.prototype._sn = function (originalFilename, chunk) {
      return new SourceNode(this._line, this._column, originalFilename, chunk);
    };
     
    ...
     
    code = this._sn("foo.rpn", [code,
                                "push(",
                                operand1.compile(), " ",
                                operator.compile(), " ",
                                operand2.compile(), ");\n"]);

    Once we have completed building the SourceNode structure for the whole input program, we can generate the compiled source and the source map by calling the SourceNode.prototype.toStringWithSourceMap method. This method returns an object with two properties: code, which is a string containing the generated JavaScript source code; and map, which is the source map.
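
    As a minimal sketch of that call (assuming the source-map package is installed, and using a single hand-built node rather than a real compiled program):

    ```javascript
    var SourceNode = require("source-map").SourceNode;

    // One line of generated code, attributed to line 1, column 0 of foo.rpn.
    var node = new SourceNode(1, 0, "foo.rpn", ["__rpn.push(5);\n"]);

    var result = node.toStringWithSourceMap({ file: "foo.js" });
    console.log(result.code);           // the generated JavaScript
    console.log(result.map.toString()); // the source map, serialized as JSON
    ```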

    Implementing Compilation

    Now that we have a strategy for generating code, and understand how to maintain line and column information so that we can generate source maps easily, we can add the methods to compile our AST nodes to lib/rpn/ast.js.

    To play nice with the global JavaScript environment, we will namespace push, pop, etc, under __rpn.

    function push(val) {
      return ["__rpn.push(", val, ");\n"];
    }
     
    AstNode.prototype.compile = function (data) {
      throw new Error("Not Yet Implemented");
    };
    AstNode.prototype.compileReference = function (data) {
      return this.compile(data);
    };
    AstNode.prototype._sn = function (originalFilename, chunk) {
      return new SourceNode(this._line, this._column, originalFilename, chunk);
    };
     
    exports.Number.prototype.compile = function (data) {
      return this._sn(data.originalFilename,
                      push(this._value.toString()));
    };
     
    exports.Variable.prototype.compileReference = function (data) {
      return this._sn(data.originalFilename,
                      push(["'", this._name, "'"]));
    };
    exports.Variable.prototype.compile = function (data) {
      return this._sn(data.originalFilename,
                      push(["window.", this._name]));
    };
     
    exports.Expression.prototype.compile = function (data) {
      var temp = "__rpn.temp";
      var output = this._sn(data.originalFilename, "");
     
      switch (this._operator.symbol) {
      case 'print':
        return output
          .add(this._left.compile(data))
          .add(this._right.compile(data))
          .add([temp, " = __rpn.pop();\n"])
          .add(["if (", temp, " <= 0) throw new Error('argument must be greater than 0');\n"])
          .add(["if (Math.floor(", temp, ") != ", temp,
                ") throw new Error('argument must be an integer');\n"])
          .add([this._operator.compile(data), "(__rpn.pop(), ", temp, ");\n"]);
      case '=':
        return output
          .add(this._right.compile(data))
          .add(this._left.compileReference(data))
          .add(["window[__rpn.pop()] ", this._operator.compile(data), " __rpn.pop();\n"]);
      case '/':
        return output
          .add(this._left.compile(data))
          .add(this._right.compile(data))
          .add([temp, " = __rpn.pop();\n"])
          .add(["if (", temp, " === 0) throw new Error('divide by zero error');\n"])
          .add(push(["__rpn.pop() ", this._operator.compile(data), " ", temp]));
      default:
        return output
          .add(this._left.compile(data))
          .add(this._right.compile(data))
          .add([temp, " = __rpn.pop();\n"])
          .add(push(["__rpn.pop() ", this._operator.compile(data), " ", temp]));
      }
    };
     
    exports.Operator.prototype.compile = function (data) {
      if (this.symbol === "print") {
        return this._sn(data.originalFilename,
                        "__rpn.print");
      }
      else {
        return this._sn(data.originalFilename,
                        this.symbol);
      }
    };
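    To make the compilation scheme concrete, here is a rough trace of what the compiler emits for a single statement like a 8 =; – simulated in plain JavaScript, with an ordinary object standing in for window and a local stack standing in for __rpn (a sketch of the generated code's behavior, not the actual compiler output):

```javascript
// Sketch: behavior of the JavaScript generated for the RPN statement `a 8 =;`.
// A plain object replaces `window` so this runs outside a browser.
var scope = {};
var stack = [];
function push(val) { stack.push(val); }
function pop() {
  if (stack.length > 0) return stack.pop();
  throw new Error("can't pop from empty stack");
}

// For `a 8 =`, the right operand compiles first, then the variable
// compiles to its name via compileReference, then the assignment runs:
push(8);
push('a');
scope[pop()] = pop(); // pops 'a', then 8

console.log(scope.a); // 8
```

    Note the pop order in the last line: the first pop yields the variable name, the second yields the value, matching the "window[__rpn.pop()] = __rpn.pop()" template above.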

    Gluing it Together

    From here we have done all the difficult work, and we can run a victory lap by connecting the modules together with a public API, and by creating a command line script to call the compiler.

    The public API resides in lib/rpn.js. It also contains the preamble, to initialize __rpn:

    var jison = require("jison");
    var sourceMap = require("source-map");
    var lex = require("./rpn/lex").lex;
    var bnf = require("./rpn/bnf").bnf;
     
    var parser = new jison.Parser({
      lex: lex,
      bnf: bnf
    });
     
    parser.yy = require("./rpn/ast");
     
    function getPreamble () {
      return new sourceMap.SourceNode(null, null, null, "")
        .add("var __rpn = {};\n")
        .add("__rpn._stack = [];\n")
        .add("__rpn.temp = 0;\n")
     
        .add("__rpn.push = function (val) {\n")
        .add("  __rpn._stack.push(val);\n")
        .add("};\n")
     
        .add("__rpn.pop = function () {\n")
        .add("  if (__rpn._stack.length > 0) {\n")
        .add("    return __rpn._stack.pop();\n")
        .add("  }\n")
        .add("  else {\n")
        .add("    throw new Error('can\\\'t pop from empty stack');\n")
        .add("  }\n")
        .add("};\n")
     
        .add("__rpn.print = function (val, repeat) {\n")
        .add("  while (repeat-- > 0) {\n")
        .add("    var el = document.createElement('div');\n")
        .add("    var txt = document.createTextNode(val);\n")
        .add("    el.appendChild(txt);\n")
        .add("    document.body.appendChild(el);\n")
        .add("  }\n")
        .add("};\n");
    }
     
    exports.compile = function (input, data) {
      var expressions = parser.parse(input.toString());
      var preamble = getPreamble();
     
      var result = new sourceMap.SourceNode(null, null, null, preamble);
      result.add(expressions.map(function (exp) {
        return exp.compile(data);
      }));
     
      return result;
    };

    Create the command line script in bin/rpn.js:

    #!/usr/bin/env node
    var fs = require("fs");
    var rpn = require("rpn");
     
    process.argv.slice(2).forEach(function (file) {
      var input = fs.readFileSync(file);
      var output = rpn.compile(input, {
        originalFilename: file
      }).toStringWithSourceMap({
    file: file.replace(/\.[\w]+$/, ".js")
      });
      var sourceMapFile = file.replace(/\.[\w]+$/, ".js.map");
      fs.writeFileSync(file.replace(/\.[\w]+$/, ".js"),
                       output.code + "\n//# sourceMappingURL=" + sourceMapFile);
      fs.writeFileSync(sourceMapFile, output.map);
    });

    Note that our script will automatically add the //# sourceMappingURL comment directive so that the browser’s debugger knows where to find the source map.
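    The file-name rewriting in the script is just a small extension-replacing regex; a quick check of what it produces:

```javascript
// The same extension-rewriting pattern used in bin/rpn.js:
// strip the final ".ext" and append the new suffix.
function rename(file, ext) {
  return file.replace(/\.[\w]+$/, ext);
}

console.log(rename("simple-example.rpn", ".js"));     // simple-example.js
console.log(rename("simple-example.rpn", ".js.map")); // simple-example.js.map
```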

    After you create the script, update your package.json:

    ...
    "bin": {
      "rpn.js": "./bin/rpn.js"
    },
    ...

    And link the package again so that the script is installed on your system:

    $ npm link .

    Seeing Results

    Here is an RPN program that we can use to test our compiler. I have saved it in examples/simple-example.rpn:

    a 8 =;
    b 3 =;
    c a b 1 - / =;
    c 1 print;
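    Before compiling, it's worth hand-checking what this program should print. Translated to infix notation, it computes a = 8, b = 3, c = a / (b - 1), then prints c once – a direct JavaScript transliteration (a sketch, not the compiler's actual output):

```javascript
// Hand-evaluation of simple-example.rpn in plain JavaScript:
var a = 8;
var b = 3;
var c = a / (b - 1); // 8 / 2
console.log(c);      // 4
```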

    Next, compile the script:

    $ cd examples/
    $ rpn.js simple-example.rpn

    This generates simple-example.js and simple-example.js.map. When we include the JavaScript file in a web page we should see the result of the computation printed on the page:

    Screenshot of simple-example.rpn's result

    Great success!

    However, we aren’t always so lucky, and our arithmetic might have some errors. Consider the following example, examples/with-error.rpn:

    a 9 =;
    b 3 =;
    c a b / =;
    c a b c - / =;
    c 1 print;

    We can compile this script and include the resulting JavaScript in a web page, but this time we won’t see any output on the page.

    By opening the debugger, setting the pause on exceptions option, and reloading, we can see how daunting debugging without source maps can be:

    Screenshot of enabling pause on exceptions.

    Screenshot of debugging with-error.rpn without source maps.

    The generated JavaScript is difficult to read, and unfamiliar to anyone who authored the original RPN script. By enabling source maps in the debugger, we can refresh and the exact line where the error occurred in our original source will be highlighted:

    Screenshot of enabling source maps.


    Screenshot of debugging with-error.rpn with source maps.

    The debugging experience with source maps is improved by orders of magnitude, and makes compiling languages to JavaScript a serious possibility.
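    In this case the highlighted line is the fourth statement, c a b c - / =;. Hand-tracing the arithmetic shows why it blows up: by that point c already equals a / b = 3, so the divisor b - c is 0, and the divide-by-zero guard the compiler emits throws. A plain JavaScript replay of that guard (a sketch, not the actual generated code):

```javascript
// Replaying with-error.rpn against the same divide-by-zero guard
// that the compiler emits for the '/' operator:
function div(x, y) {
  if (y === 0) throw new Error('divide by zero error');
  return x / y;
}

var a = 9, b = 3;
var c = div(a, b);     // c = 3
try {
  c = div(a, b - c);   // b - c === 0, so the guard throws
} catch (e) {
  console.log(e.message); // divide by zero error
}
```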

    At the end of the day though, the debugging experience is only as good as the information encoded in the source maps by your compiler. It can be hard to judge the quality of your source maps simply by looking at the set of source location coordinates that they are mapping between, so Tobias Koppers created a tool to let you easily visualize your source maps.

    Here is the visualization of one of our source maps:


    Screenshot of the source map visualization tool.

    Good luck writing your own compiler that targets JavaScript!


  9. Detecting touch: it’s the ‘why’, not the ‘how’

    One common aspect of making a website or application “mobile friendly” is the inclusion of tweaks, additional functionality or interface elements that are particularly aimed at touchscreens. A very common question from developers is now “How can I detect a touch-capable device?”

    Feature detection for touch

    Although there used to be a few incompatibilities and proprietary solutions in the past (such as Mozilla’s experimental, vendor-prefixed event model), almost all browsers now implement the same Touch Events model (based on a solution first introduced by Apple for iOS Safari, which subsequently was adopted by other browsers and retrospectively turned into a W3C draft specification).

    As a result, being able to programmatically detect whether or not a particular browser supports touch interactions involves a very simple feature detection:

    if ('ontouchstart' in window) {
      /* browser with Touch Events
         running on touch-capable device */
    }

    This snippet works reliably in modern browsers, but older versions notoriously had a few quirks and inconsistencies which required jumping through various detection hoops. If your application targets these older browsers, I’d recommend having a look at Modernizr – and in particular its various touch test approaches – which smooths over most of these issues.

    I noted above that “almost all browsers” support this touch event model. The big exception here is Internet Explorer. While up to IE9 there was no support for any low-level touch interaction, IE10 introduced support for Microsoft’s own Pointer Events. This event model – which has since been submitted for W3C standardisation – unifies “pointer” devices (mouse, stylus, touch, etc) under a single new class of events. As this model does not, by design, include any separate ‘touch’ events, the feature detection for ontouchstart will naturally not work. The suggested method of detecting whether a browser that uses Pointer Events is running on a touch-enabled device instead involves checking for the existence and return value of navigator.maxTouchPoints (note that Microsoft’s Pointer Events are currently still vendor-prefixed, so in practice we’ll be looking for navigator.msMaxTouchPoints). If the property exists and returns a value greater than 0, we have touch support.

    if (navigator.msMaxTouchPoints > 0) {
      /* IE with pointer events running
         on touch-capable device */
    }

    Adding this to our previous feature detect – and also including the non-vendor-prefixed version of the Pointer Events one for future compatibility – we get a still reasonably compact code snippet:

    if (('ontouchstart' in window) ||
         (navigator.maxTouchPoints > 0) ||
         (navigator.msMaxTouchPoints > 0)) {
          /* browser with either Touch Events or Pointer Events
             running on touch-capable device */
    }
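    The combined check is easy to wrap in a helper function that can also be exercised against mock objects. The win and nav parameters here are my addition for testability – in a real page you would pass the window and navigator globals:

```javascript
// Hypothetical wrapper around the combined feature detection; `win` and
// `nav` are injected so the logic can be exercised outside a browser.
function isTouchCapable(win, nav) {
  return ('ontouchstart' in win) ||
         (nav.maxTouchPoints > 0) ||
         (nav.msMaxTouchPoints > 0);
}

// A Touch Events browser:
console.log(isTouchCapable({ ontouchstart: null }, {}));  // true
// An IE10-style Pointer Events browser on a touchscreen:
console.log(isTouchCapable({}, { msMaxTouchPoints: 2 })); // true
// A desktop browser with no touch support:
console.log(isTouchCapable({}, { maxTouchPoints: 0 }));   // false
```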

    How touch detection is used

    Now, there are already quite a few commonly-used techniques for “touch optimisation” which take advantage of these sorts of feature detects. The most common use case for detecting touch is to increase the responsiveness of an interface for touch users.

    When using a touchscreen interface, browsers introduce an artificial delay (in the range of about 300ms) between a touch action – such as tapping a link or a button – and the moment the actual click event is fired.

    More specifically, in browsers that support Touch Events the delay happens between touchend and the simulated mouse events that these browsers also fire for compatibility with mouse-centric scripts:

    touchstart > [touchmove]+ > touchend > delay > mousemove > mousedown > mouseup > click

    See the event listener test page to see the order in which events are being fired, code available on GitHub.

    This delay has been introduced to allow users to double-tap (for instance, to zoom in/out of a page) without accidentally activating any page elements.

    It’s interesting to note that Firefox and Chrome on Android have removed this delay for pages with a fixed, non-zoomable viewport.

    <meta name="viewport" content="... user-scalable = no ...">

    See the event listener with user-scalable=no test page, code available on GitHub.

    There is some discussion of tweaking Chrome’s behavior further for other situations – see issue 169642 in the Chromium bug tracker.

    Although this affordance is clearly necessary, it can make a web app feel slightly laggy and unresponsive. One common trick has been to check for touch support and, if present, react directly to a touch event (either touchstart – as soon as the user touches the screen – or touchend – after the user has lifted their finger) instead of the traditional click:

    /* if touch supported, listen to 'touchend', otherwise 'click' */
    var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');
    blah.addEventListener(clickEvent, function() { ... });

    Although this type of optimisation is now widely used, it is based on a logical fallacy which is now starting to become more apparent.

    The artificial delay is also present in browsers that use Pointer Events.

    pointerover > mouseover > pointerdown > mousedown > pointermove > mousemove > pointerup > mouseup > pointerout > mouseout > delay > click

    Although it’s possible to extend the above optimisation approach to check navigator.maxTouchPoints and to then hook up our listener to pointerup rather than click, there is a much simpler way: setting the touch-action CSS property of our element to none eliminates the delay.

    /* suppress default touch action like double-tap zoom */
    a, button {
      -ms-touch-action: none;
          touch-action: none;
    }

    See the event listener with touch-action:none test page, code available on GitHub.

    False assumptions

    It’s important to note that these types of optimisations based on the availability of touch have a fundamental flaw: they make assumptions about user behavior based on device capabilities. More explicitly, the example above assumes that because a device is capable of touch input, a user will in fact use touch as the only way to interact with it.

    This assumption probably held some truth a few years back, when the only devices that featured touch input were the classic “mobile” and “tablet”. Here, touchscreens were the only input method available. In recent months, though, we’ve seen a whole new class of devices which feature both a traditional laptop/desktop form factor (including a mouse, trackpad, keyboard) and a touchscreen, such as the various Windows 8 machines or Google’s Chromebook Pixel.

    As an aside, even in the case of mobile phones or tablets, it was already possible – on some platforms – for users to add further input devices. While iOS only caters for pairing an additional bluetooth keyboard to an iPhone/iPad purely for text input, Android and Blackberry OS also let users add a mouse.

    On Android, this mouse will act exactly like a “touch”, even firing the same sequence of touch events and simulated mouse events, including the dreaded delay in between – so optimisations like our example above will still work fine. Blackberry OS, however, purely fires mouse events, leading to the same sort of problem outlined below.

    The implications of this change are slowly beginning to dawn on developers: that touch support does not necessarily mean “mobile” anymore, and more importantly that even if touch is available, it may not be the primary or exclusive input method that a user chooses. In fact, a user may even transition between any of their available input methods in the course of their interaction.

    The innocent code snippets above can have quite annoying consequences on this new class of devices. In browsers that use Touch Events:

    var clickEvent = ('ontouchstart' in window ? 'touchend' : 'click');

    is basically saying “if the device supports touch, only listen to touchend and not click” – which, on a multi-input device, immediately shuts out any interaction via mouse, trackpad or keyboard.

    Touch or mouse?

    So what’s the solution to this new conundrum of touch-capable devices that may also have other input methods? While some developers have started to look at complementing a touch feature detection with additional user agent sniffing, I believe that the answer – as in so many other cases in web development – is to accept that we can’t fully detect or control how our users will interact with our web sites and applications, and to be input-agnostic. Instead of making assumptions, our code should cater for all eventualities. Specifically, instead of making the decision about whether to react to click or touchend/touchstart mutually exclusive, these should all be taken into consideration as complementary.

    Certainly, this may involve a bit more code, but the end result will be that our application will work for the largest number of users. One approach, already familiar to developers who’ve strived to make their mouse-specific interfaces also work for keyboard users, would be to simply “double up” your event listeners (while taking care to prevent the functionality from firing twice by stopping the simulated mouse events that are fired following the touch events):

    blah.addEventListener('touchend', function(e) {
      /* prevent delay and simulated mouse events */
      e.preventDefault();
      someFunction();
    });
    blah.addEventListener('click', someFunction);

    If this isn’t DRY enough for you, there are of course fancier approaches, such as only defining your functions for click and then bypassing the dreaded delay by explicitly firing that handler:

    blah.addEventListener('touchend', function(e) {
      /* prevent delay and simulated mouse events */
      e.preventDefault();
      /* trigger the actual behavior we bound to the 'click' event */
      e.target.click();
    });
    blah.addEventListener('click', function() {
      /* actual functionality */
    });

    That last snippet does not cover all possible scenarios though. For a more robust implementation of the same principle, see the FastClick script from FT labs.

    Being input-agnostic

    Of course, battling with delay on touch devices is not the only reason why developers want to check for touch capabilities. Current discussions – such as this issue in Modernizr about detecting a mouse user – now revolve around offering completely different interfaces to touch users, compared to mouse or keyboard, and whether or not a particular browser/device supports things like hovering. And even beyond JavaScript, similar concepts (pointer and hover media features) are being proposed for Media Queries Level 4. But the principle is still the same: as there are now common multi-input devices, it’s not straightforward (and in many cases, impossible) anymore to determine if a user is on a device that exclusively supports touch.
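    For reference, the pointer and hover media features proposed for Media Queries Level 4 would look roughly like this – the syntax below follows the draft at the time of writing, so treat it as illustrative:

```css
/* Proposed Media Queries Level 4 interaction features (draft syntax) */
@media (pointer: coarse) {
  /* primary input is imprecise, e.g. a finger: enlarge hit targets */
  button { min-height: 44px; }
}
@media (hover: none) {
  /* primary input cannot hover: avoid hover-only UI */
  .tooltip { display: none; }
}
```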

    The more generic approach taken in Microsoft’s Pointer Events specification – which is already being scheduled for implementation in other browsers such as Chrome – is a step in the right direction (though it still requires extra handling for keyboard users). In the meantime, developers should be careful not to draw the wrong conclusions from touch support detection and avoid unwittingly locking out a growing number of potential multi-input users.


  10. Serving Backbone for Robots & Legacy Browsers

    I like the Single Page Application model and Backbone.js, because I get it. As a former Java developer, I am used to object oriented coding and events for messaging. Within our HTML5 consultancy, SC5, Backbone has become almost a synonym for single page applications, and it is easy to move between projects because everybody gets the same basic development model.

    We hate the fact that we need server-side workarounds for robots. Making applications crawlable is very reasonable business-wise, but ill-suited for the SPA model. Data-driven single page applications are typically served only an HTML page skeleton, and the actual construction of all the visual elements is done in the browser. Any other way would easily lead to duplicate code paths (one in the browser, one on the server). Some have even considered giving up the SPA model and moving the logic and representation back to the server.

    Still, we should not let the tail wag the dog. Why sacrifice the user experience of 99.9% of users for the sake of the remaining 0.1%? Instead, for such low traffic, a better-suited solution is a server-side workaround.

    Solving the Crawling Problem with an App Proxy

    The obvious solution to the problem is running the same application code at both ends. As in the digital television transition, where a set-top box fills the gap for legacy televisions by crunching the digital signal into analog form, a proxy would run the application on the server and serve the resulting HTML back to the crawlers. Smart browsers would get all the interactive candy, whereas crawlers and legacy browsers would just get the pre-processed HTML document.

    Proxy pattern explained through a TV set metaphor

    Thanks to node.js, JavaScript developers have been able to use their favourite language on both ends for some time already, and proxy-like solutions have become a plausible option.

    Implementing DOM and Browser APIs on the Server

    Single page applications typically depend heavily on DOM manipulation. Typical server applications combine several view templates into a page through concatenation, whereas Backbone applications append the views into the DOM as new elements. The developer would either need to emulate the DOM on the server side, or build an abstraction layer that permits using the DOM in the browser and template concatenation on the server. The DOM can be serialized into an HTML document or vice versa, but these techniques cannot easily be mixed at runtime.

    A typical Backbone application talks with the browser APIs through several different layers – either by using Backbone or jQuery APIs, or by accessing the APIs directly. Backbone itself has only minor dependencies on the layers below – jQuery is used for DOM manipulation and AJAX requests, and application state handling is done using pushState.

    Sample Backbone layers

    Node.js has ready-made modules for each level of abstraction: JSDOM offers a full DOM implementation on the server side, whereas Cheerio provides a jQuery API on top of a fake DOM with better performance. Some of the other server-side Backbone implementations, like AirBnB Rendr and Backbone.LayoutManager, set the abstraction at the level of the Backbone APIs (only), and hide the actual DOM manipulation under a set of conventions. Actually, Backbone.LayoutManager does offer the jQuery API through Cheerio, but the main purpose of the library itself is to ease the juggling between Backbone layouts, and hence promote a higher level of abstraction.

    Introducing backbone-serverside

    Still, we went for our own solution. Our team is a pack of old dogs that do not learn new tricks easily. We believe there is no easy way of fully abstracting out the DOM without changing what Backbone applications essentially are. We like our Backbone applications without extra layers, and jQuery has always served us as a good compatibility layer to defend ourselves against browser differences in DOM manipulation. Like Backbone.LayoutManager, we chose Cheerio as our jQuery abstraction. We solved the Backbone browser API dependencies by overriding Backbone.history and Backbone.ajax with API-compatible replacements. Actually, in the first draft version, these implementations remain bare-minimum stubs.

    We are quite happy with the solution we have in the works. If you study the backbone-serverside example, it looks quite close to what a typical Backbone application might be. We do not enforce working at any particular level of abstraction; you can use either the Backbone APIs or the subset of APIs that jQuery offers. If you want to go deeper, nothing stops you from implementing a server-side version of a browser API. In such cases, the actual server-side implementation may be a stub – after all, who needs touch event handling on the server?

    The current solution assumes a node.js server, but it does not necessarily mean drastic changes to an existing server stack. The existing servers for the API and static assets can remain as-is, but there should be a proxy to forward the requests of dumb clients to our server. The sample application serves static files, the API and the proxy from the same server, but they could all be decoupled with small modifications.

    backbone-serverside as a proxy
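    The proxy needs some rule for deciding which clients get the server-rendered HTML. A hypothetical helper for that decision might look like this – the user-agent patterns below are purely illustrative, and a real deployment would likely use something more thorough (or a library, as discussed later):

```javascript
// Hypothetical rule for routing "dumb" clients (crawlers and legacy
// browsers) to the server-rendering proxy. Patterns are illustrative only.
var DUMB_CLIENTS = /googlebot|bingbot|baiduspider|msie [1-8]\./i;

function needsServerRendering(userAgent) {
  return DUMB_CLIENTS.test(userAgent || '');
}

console.log(needsServerRendering(
  'Mozilla/5.0 (compatible; Googlebot/2.1)'));      // true
console.log(needsServerRendering(
  'Mozilla/5.0 (X11; Linux x86_64) Firefox/25.0')); // false
```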

    Writing Apps That Work on backbone-serverside

    Currently the backbone-serverside core is a bare-minimum set of adapters to make Backbone run on node.js. Porting your application to run on the server may require further modifications.

    If the application does not already utilise a module loader, such as RequireJS or Browserify, you need to figure out how to load the same modules on the server. In our example below, we use RequireJS and need a bit of JavaScript to use Cheerio instead of vanilla jQuery on the server. Otherwise we are pretty much able to use the same stack we typically use (jQuery, Underscore/Lo-Dash, Backbone and Handlebars). When choosing the modules, you may need to limit yourself to the ones that do not play with browser APIs directly, or be prepared to write a few stubs yourself.

    // Compose RequireJS configuration run-time by determining the execution
    // context first. We may pass different values to browser and server.
    var isBrowser = typeof(window) !== 'undefined';
     
    // Execute this for RequireJS (client or server-side, no matter which)
    requirejs.config({
     
        paths: {
            text: 'components/requirejs-text/text',
            underscore: 'components/lodash/dist/lodash.underscore',
            backbone: 'components/backbone/backbone',
            handlebars: 'components/handlebars/handlebars',
            jquery: isBrowser ? 'components/jquery/jquery' : 'emptyHack'
        },
     
        shim: {
            'jquery': {
                deps: ['module'],
                exports: 'jQuery',
                init: function (module) {
                    // Fetch the jQuery adapter parameters for server case
                    if (module && module.config) {
                        return module.config().jquery;
                    }
     
                    // Fallback to browser specific thingy
                    return this.jQuery.noConflict();
                }
            },
            'underscore': {
                exports: '_',
                init: function () {
                    return this._.noConflict();
                }
            },
            'backbone': {
                deps: ['underscore', 'jquery'],
                exports: 'Backbone',
                init: function (_, $) {
                    // Inject adapters when in server
                    if (!isBrowser) {
                        var adapters = require('../..');
                        // Add the adapters we're going to be using
                        _.extend(this.Backbone.history,
                            adapters.backbone.history);
                        this.Backbone.ajax = adapters.backbone.ajax;
                        Backbone.$ = $;
                    }
     
                    return this.Backbone.noConflict();
                }
            },
            'handlebars': {
                exports: 'Handlebars',
                init: function() {
                    return this.Handlebars;
                }
            }
        },
     
        config: {
            // The API endpoints can be passed via URLs
            'collections/items': {
                // TODO Use full path due to our XHR adapter limitations
                url: 'http://localhost:8080/api/items'
            }
        }
    });

    Once the configuration works alright, the application can be bootstrapped normally. In the example, we use the Node.js express server stack and pass specific request paths to the Backbone Router implementation for handling. When done, we serialize the DOM into text and send it to the client. Some extra code is needed to deal with Backbone’s asynchronous event model; we will discuss that more thoroughly below.

    // URL Endpoint for the 'web pages'
    server.get(/\/(items\/\d+)?$/, function(req, res) {
    // Remove preceding '/'
        var path = req.path.substr(1, req.path.length);
        console.log('Routing to \'%s\'', path);
     
        // Initialize a blank document and a handle to its content
        //app.router.initialize();
     
    // If we're already on the current path, just serve the 'cached' HTML
    if (path === Backbone.history.path) {
        console.log('Serving response from cache');
        return res.send($html.html());
    }
     
        // Listen to state change once - then send the response
        app.router.once('done', function(router, status) {
            // Just a simple workaround in case we timeouted or such
            if (res.headersSent) {
                console.warn('Could not respond to request in time.');
            }
     
            if (status === 'error') {
                res.send(500, 'Our framework blew it. Sorry.');
            }
            if (status === 'ready') {
                // Set the bootstrapped attribute to communicate we're done
                var $root = $html('#main');
                $root.attr('data-bootstrapped', true);
     
                // Send the changed DOM to the client
                console.log('Serving response');
                res.send($html.html());
            }
        });
     
        // Then do the trick that would cause the state change
        Backbone.history.navigate(path, { trigger: true });
    });

    Dealing with Application Events and States

    Backbone uses an asynchronous, event-driven model for communicating between the models, views and other objects. For an object-oriented developer the model is fine, but it causes a few headaches on node.js. After all, Backbone applications are data-driven; pulling data from a remote API endpoint may take seconds, and once it eventually arrives, the models will notify the views to repaint themselves. There is no easy way to know when all the application’s DOM manipulation is finished, so we needed to invent our own mechanism.

    In our example we utilise simple state machines to solve the problem. Since the simplified example does not have a separate application singleton class, we use the router object as the single point of control. The router listens for changes in the state of each view, and notifies the express server about readiness to render only when all the views are ready. At the beginning of a request, the router resets the view states to pending and does not notify the browser or server until it knows all the views are done. Correspondingly, the views do not claim to be done until they know they have been fed valid data from their corresponding model/collection. The state machine is simple and can be applied consistently throughout the different Backbone objects.

    Activity diagram of a Backbone app event flow

    Beyond the Experimental Hack

    The current version is still experimental work, but it proves Backbone applications can happily live on the server without breaking Backbone APIs or introducing too many new conventions. Currently at SC5 we have a few projects starting that could utilise this implementation, so we will continue the effort.

    We believe the web stack community benefits from this effort, thus we have published the work on GitHub. It is far from finished and we would appreciate all community contributions in the form of ideas and code. Share the love, criticism and all in between: @sc5io #backboneserverside.

    In particular, we plan to change – and hope to get contributions for – the following:

    • The current example will likely misbehave on concurrent requests. It shares a single DOM representation for all the ongoing requests, which can easily mess up each other.
    • The state machine implementation is just one idea on how to determine when to serialize the DOM back to the client. It likely can be drastically simplified for most use cases, and it is quite possible to find a better generic solution.
    • The server-side route handling is naive. To emphasize that only the crawlers and legacy browsers might need server-side rendering, the sample could use projects like express-device to detect whether we are serving a legacy browser or a crawler.
    • The sample application is a very rudimentary master-details view application and will not likely cause any wow effect. It needs a little bit of love.

    We encourage you to fork the repository and start from modifying the example for your needs. Happy Hacking!