
JavaScript Articles

  1. The Making of the Time Out Firefox OS app

    A rash start into adventure

    So we told our client that yes, of course, we would do their Firefox OS app. We didn’t know much about FFOS at the time. But, hey, we had just completed refactoring their native iOS and Android apps. Web applications were our core business all along. So what was to be feared?

    More than we thought, it turned out. Some of the dragons along the way we fought and defeated ourselves. At times we feared that we wouldn’t be able to rescue the princess in time (i.e. before MWC 2013). But whenever we got really lost in detail forest, the brave knights from Mozilla came to our rescue. In the end, it all turned out well and the team lived happily ever after.

    But here’s the full story:

    Mission & challenge

Just like their iOS and Android apps, Time Out’s new Firefox OS app was supposed to allow browsing their rich content on bars, restaurants, things to do and more by category, area, proximity or keyword search, patient zero being Barcelona. We would need to show results as illustrated lists as well as visually on a map, and have a decent detail view, complete with ratings, access details, phone button and social tools.

    But most importantly, and in addition to what the native apps did, this app was supposed to do all of that even when offline.

Oh, and there needed to be a presentable, working prototype in four weeks’ time.

Cross-platform reusability of the code as a mobile website or as the base of HTML5 apps on other mobile platforms was clearly priority 2, but still to be kept in mind.

The princess was clearly in danger. So we arrested everyone on the floor who could possibly be of help and locked them into a room to get the basics sorted out. It quickly emerged that the main architectural challenges were that

    • we had a lot of things to store on the phone, including the app itself, a full street-level map of Barcelona, and Time Out’s information on every venue in town (text, images, position & meta info),
    • at least some of this would need to be loaded from within the app; once initially and synchronizable later,
    • the app would need to remain interactively usable during these potentially lengthy downloads, so they’d need to be asynchronous,
• whenever the browser location changed, this would be interrupted.

    In effect, all the different functionalities would have to live within one single HTML document.

    One document plus hash tags

    For dynamically rendering, changing and moving content around as required in a one-page-does-all scenario, JavaScript alone didn’t seem like a wise choice. We’d been warned that Firefox OS was going to roll out on a mix of devices including the very low cost class, so it was clear that fancy transitions of entire full-screen contents couldn’t be orchestrated through JS loops if they were to happen smoothly.

    On the plus side, there was no need for JS-based presentation mechanics. With Firefox OS not bringing any graveyard of half-dead legacy versions to cater to, we could (finally!) rely on HTML5 and CSS3 alone and without fallbacks. Even beyond FFOS, the quick update cycles in the mobile environment didn’t seem to block the path for taking a pure CSS3 approach further to more platforms later.

    That much being clear, which better place to look for best practice examples than Mozilla Hacks? After some digging, Thomas found Hacking Firefox OS in which Luca Greco describes the use of fragment identifiers (aka hashtags) appended to the URL to switch and transition content via CSS alone, which we happily adopted.

    Another valuable source of ideas was a list of GAIA building blocks on Mozilla’s website, which has since been replaced by the even more useful Building Firefox OS site.

In effect, we ended up thinking in terms of screens: each physically a <div> whose visibility and transitions are governed by :target CSS selectors that draw on the browser location’s hashtag. Luckily, there’s also the hashchange event that we could additionally listen to in order to handle the app-level aspects of such screen changes in JavaScript.

    Our main HTML and CSS structure hence looked like this:
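
The actual markup isn’t reproduced here, but a minimal sketch of the pattern (names are illustrative, not the original ones) looks like this:

<section id="container">
  <div id="homeScreen" class="screen">...</div>
  <div id="resultScreen" class="screen">...</div>
  <div id="detailScreen" class="screen">...</div>
</section>

.screen {
  transform: translateX(100%);
  transition: transform 0.3s ease;
}

/* the screen whose id matches the current hashtag slides in */
.screen:target {
  transform: translateX(0);
}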

    And a menu

We modeled the drawer menu very similarly, except that it sits in a <nav> element on the same level as the <section> container holding all the screens. Its activation and deactivation works by catching the menu icon clicks, then actively changing the screen container’s data-state attribute from JS, which triggers the corresponding CSS3 slide-in / slide-out transition (of the screen container, revealing the menu beneath).
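
A minimal sketch of that mechanism (element names are ours for illustration):

document.getElementById('menuIcon').addEventListener('click', function () {
  var screens = document.getElementById('screens');
  var state = screens.getAttribute('data-state');
  // toggling the attribute triggers the CSS transition below
  screens.setAttribute('data-state',
    state === 'menu-visible' ? 'menu-hidden' : 'menu-visible');
});

#screens {
  transition: transform 0.3s ease;
}

#screens[data-state="menu-visible"] {
  transform: translateX(80%); /* slide aside, revealing the <nav> beneath */
}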

    This served as our “Hello, World!” test for CSS3-based UI performance on low-end devices, plus as a test case for combining presentation-level CSS3 automation with app-level explicit status handling. We took down a “yes” for both.

    UI

    By the time we had put together a dummy around these concepts, the first design mockups from Time Out came in so that we could start to implement the front end and think about connecting it to the data sources.

For presentation, we tried hard to keep the HTML and CSS to the absolute minimum. Mozilla’s GAIA examples were once more a very valuable source of ideas.

Again, targeting Firefox OS alone allowed us to break free of the backwards compatibility hell that we were still living in, desktop-wise. No one would ask us “Will it display well in IE8?” or worse things. We could finally use real <section>, <nav>, <header>, and <menu> tags instead of an army of different classes of <div>. What a relief!

    The clear, rectangular, flat and minimalistic design we got from Time Out also did its part to keep the UI HTML simple and clean. After we were done with creating and styling the UI for 15 screens, our HTML had only ~250 lines. We later improved that to 150 while extending the functionality, but that’s a different story.

    Speaking of styling, not everything that had looked good on desktop Firefox even in its responsive design view displayed equally well on actual mobile devices. Some things that we fought with and won:

    Scale: The app looked quite different when viewed on the reference device (a TurkCell branded ZTE device that Mozilla had sent us for testing) and on our brand new Nexus 4s:

After a lot of experimenting, tearing out some hair, and looking around at how others had addressed graceful, proportional scaling for a consistent look & feel across resolutions, we stumbled upon this magic incantation:

    <meta name="viewport" content="user-scalable=no, initial-scale=1,
    maximum-scale=1, width=device-width" />

    What it does, to quote an article at Opera, is to tell the browser that there is “No scaling needed, thank you very much. Just make the viewport as many pixels wide as the device screen width”. It also prevents accidental scaling while the map is zoomed. There is more information on the topic at MDN.

    Then there are things that necessarily get pixelated when scaled up to high resolutions, such as the API based venue images. Not a lot we could do about that. But we could at least make the icons and logo in the app’s chrome look nice in any resolution by transforming them to SVG.

    Another issue on mobile devices was that users have to touch the content in order to scroll it, so we wanted to prevent the automatic highlighting that comes with that:

li, a, span, button, div
{
    outline: none;
    -moz-tap-highlight-color: transparent;
    -moz-user-select: none;
    -moz-user-focus: ignore;
}

We’ve since been warned that suppressing the default highlighting can be an issue in terms of accessibility, so you might want to consider this carefully.

    Connecting to the live data sources

    So now we had the app’s presentational base structure and the UI HTML / CSS in place. It all looked nice with dummy data, but it was still dead.

    Trouble with bringing it to life was that Time Out was in the middle of a big project to replace its legacy API with a modern Graffiti based service and thus had little bandwidth for catering to our project’s specific needs. The new scheme was still prototypical and quickly evolving, so we couldn’t build against it.

The legacy construct already comprised a proxy that wrapped the raw API into something more suitable for consumption by their iOS and Android apps, but after close examination we found we’d better re-re-wrap that on the fly in PHP for a couple of purposes:

• Adding CORS support to avoid cross-origin issues, with the API and the app living in different subdomains of timeout.com,
• stripping the API output down to what the FFOS app really needed, which we could see would reduce bandwidth and increase speed by an order of magnitude,
• laying the foundation for harvesting API-based data for offline use, which we already knew we’d need to do later.

As an alternative to server-side CORS support, one could also think of using the SystemXHR API. It is, however, a mighty and potentially dangerous tool, and we also wanted to avoid any needless dependency on FFOS-only APIs.

    So while the approach wasn’t exactly future proof, it helped us a lot to get to results quickly, because the endpoints that the app was calling were entirely of our own choice and making, so that we could adapt them as needed without time loss in communication.

    Populating content elements

For all things dynamic and API-driven, we used the same approach to making it visible in the app:

• Have a simple, minimalistic, empty, hidden, singleton HTML template,
• clone that template (N-fold for repeated elements),
• ID and fill the clone(s) with API-based content,
• and for super simple elements, such as <li>s, skip the cloning and whip up the HTML on the fly while filling.

As an example, let’s consider the filters for finding venues. Cuisine is a suitable filter for restaurants, but certainly not for museums. The same is true for filter values: there are vegetarian restaurants in Barcelona, but certainly no vegetarian bars. So the filter names and the lists of possible values need to be requested from the API after the venue type is selected.

    In the UI, the collapsible category filter for bars & pubs looks like this:

    The template for one filter is a direct child of the one and only

    <div id="templateContainer">

    which serves as our central template repository for everything cloned and filled at runtime and whose only interesting property is being invisible. Inside it, the template for search filters is:

    <div id="filterBoxTemplate">
      <span></span>
      <ul></ul>
    </div>

    So for each filter that we get for any given category, all we had to do was to clone, label, and then fill this template:

$('#filterBoxTemplate').clone().attr('id', filterItem.id)
  .appendTo('#categoryResultScreen .filter-container');
...
$('#' + filterItem.id).children('.filter-button').html(filterItem.name);

As you certainly guessed, we then had to call the API once again for each filter in order to learn about its possible values, which were then rendered into <li> elements within the filter’s <ul> on the fly:

    $("#" + filterId).children('.filter_options').html(
    '<li><span>Loading ...</span></li>');
    
    apiClient.call(filterItem.api_method, function (filterOptions)
    {
      ...
      $.each(filterOptions, function(key, option)
      {
        var entry = $('<li filterId="' + option.id + '"><span>'
          + option.name + '</span></li>');
    
if (selectedOptionId && selectedOptionId == option.id)
        {
          entry.addClass('filter-selected');
        }
    
        $("#" + filterId).children('.filter_options').append(entry);
      });
    ...
    });

    DOM based caching

To save bandwidth and increase responsiveness in online use, we took this simple approach a little further and consciously stored more application-level information in the DOM than was needed for the current display, whenever that information was likely to be needed in the next step. This way, we’d have easy and quick local access to it without calling – and waiting for – the API again.

    The technical way we did so was a funny hack. Let’s look at the transition from the search result list to the venue detail view to illustrate:

    As for the filters above, the screen class for the detailView has an init() method that populates the DOM structure based on API input as encapsulated on the application level. The trick now is, while rendering the search result list, to register anonymous click handlers for each of its rows, which – JavaScript passing magic – contain a copy of, rather than a reference to, the venue objects used to render the rows themselves:

    renderItems: function (itemArray)
    {
      ...
    
      $.each(itemArray, function(key, itemData)
      {
        var item = screen.dom.resultRowTemplate.clone().attr('id',
          itemData.uid).addClass('venueinfo').click(function()
        {
          $('#mapScreen').hide();
          screen.showDetails(itemData);
        });
    
        $('.result-name', item).text(itemData.name);
        $('.result-type-label', item).text(itemData.section);
        $('.result-type', item).text(itemData.subSection);
    
        ...
    
        listContainer.append(item);
      });
    },
    
    ...
    
    showDetails: function (venue)
    {
      require(['screen/detailView'], function (detailView)
      {
        detailView.init(venue);
      });
    },

In effect, there’s a copy of the data for rendering each venue’s detail view stored in the DOM – not in hidden elements or custom attributes of the node objects, but conveniently in each of the anonymous, pass-by-value-based click event handlers for the result list rows. With the added benefit that the data doesn’t need to be explicitly read again, but actively feeds itself into the venue details screen as soon as a row receives a touch event.

    And dummy feeds

Finishing the app before MWC 2013 was pretty much a race against time, both for us and for Time Out’s API folks, who had an entirely different and equally – if not more – sportive thing to do. Therefore they had very limited time for adding to the (legacy) API that we were building against. For one data feed, this meant that we had to resort to including static JSON files in the app’s manifest and distribution, then using relative, self-referencing URLs as fake API endpoints. The illustrated list of top venues on the app’s main screen was driven this way.

Not exactly nice, but much better than throwing static content into the HTML! Also, it kept the display code fit for switching to the dynamic data source that eventually materialized later, and compatible with our offline data caching strategy.

    As the lack of live data on top venues then extended right to their teaser images, we made the latter physically part of the JSON dummy feed. In Base64 :) But even the low-end reference device did a graceful job of handling this huge load of ASCII garbage.
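
A dummy feed entry of that kind looked roughly like this (the field names are invented for illustration, and the feed was requested through a relative URL instead of a live endpoint):

{
  "venues": [
    {
      "uid": "12345",
      "name": "Some Top Venue",
      "image": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
    }
  ]
}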

    State preservation

We had a whopping 5M of local storage to spare, and different plans already (as well as much higher needs) for storing the map and application data for offline use. So what to do with this liberal and easily accessed storage location? We thought we could at least preserve the current application state here, so you’d find the app exactly as you left it when you returned to it.
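
A minimal sketch of that idea (the storage key and fields are illustrative):

// Save the current state whenever the visible screen changes
window.addEventListener('hashchange', function () {
  localStorage.setItem('appState', JSON.stringify({
    screen: window.location.hash
  }));
});

// Restore it on startup
var state = JSON.parse(localStorage.getItem('appState') || '{}');
if (state.screen) {
  window.location.hash = state.screen;
}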

    Map

    A city guide is the very showcase of an app that’s not only geo aware but geo centric. Maps fit for quick rendering and interaction in both online and offline use were naturally a paramount requirement.

    After looking around what was available, we decided to go with Leaflet, a free, easy to integrate, mobile friendly JavaScript library. It proved to be really flexible with respect to both behaviour and map sources.

    With its support for pinching, panning and graceful touch handling plus a clean and easy API, Leaflet made us arrive at a well-usable, decent-looking map with moderate effort and little pain:
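
Getting there takes only a few lines of code. A minimal sketch (the local tile URL follows the path scheme shown further below):

// Center the map on Barcelona at a city-wide zoom level
var map = L.map('mapScreen').setView([41.3851, 2.1734], 13);

L.tileLayer('/mobile/maps/barcelona/{z}/{x}/{y}.png', {
  minZoom: 10,
  maxZoom: 16 // street level
}).addTo(map);

L.marker([41.3851, 2.1734]).addTo(map).bindPopup('Venue name');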

For a different project, we later rendered the OSM vector data for most of Europe into terabytes of PNG tiles in cloud storage using on-demand cloud power. Which we’d recommend as an approach if there’s a good reason not to rely on third-party hosted maps, as long as you don’t try this at home; moving the tiles may well be slower and more costly than their generation.

But as time was tight before the initial release of this app, we just – legally and cautiously(!) – scraped ready-to-use OSM tiles off MapQuest.com.

The packaging of the tiles for offline use was rather easy for Barcelona because about 1000 map tiles are sufficient to cover the whole city area up to street level (zoom level 16). So we could add each tile as a single line in the manifest.appcache file. The resulting, fully automatic, browser-based download on first use was only 10M.

    This left us with a lot of lines like

    /mobile/maps/barcelona/15/16575/12234.png
    /mobile/maps/barcelona/15/16575/12235.png
    ...

    in the manifest and wishing for a $GENERATE clause as for DNS zone files.
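
Lacking such a clause, a few lines of Node.js can at least generate those entries (a sketch; the tile-coordinate ranges are illustrative):

// Print one manifest line per map tile in a given tile-coordinate range
var zoom = 15;
for (var x = 16575; x <= 16600; x++) {
  for (var y = 12234; y <= 12260; y++) {
    console.log('/mobile/maps/barcelona/' + zoom + '/' + x + '/' + y + '.png');
  }
}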

As convenient as it may seem to throw all your offline dependencies’ locations into a single file and just expect them to be available as a consequence, there are significant drawbacks to this approach. The article Application Cache is a Douchebag by Jake Archibald summarizes them, and some help is given at HTML5Rocks by Eric Bidelman.

We found at the time that the degree of control over the current download state was limited, and that the process of resuming the app cache load in case the time users initially spent in our app didn’t suffice for it to complete was rather tiresome.

For Barcelona, we resorted to marking the cache state as dirty in Local Storage and clearing that flag only after we received the updateready event of the window.applicationCache object. In the later generalization to more cities, though, we moved the map away from the app cache altogether.
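
In code, the dirty-flag pattern boils down to something like this sketch:

// Assume the cache is dirty until it reports itself complete
localStorage.setItem('appCacheDirty', 'true');

window.applicationCache.addEventListener('updateready', function () {
  // The new cache has been fully downloaded
  localStorage.removeItem('appCacheDirty');
});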

    Offline storage

    The first step towards offline-readiness was obviously to know if the device was online or offline, so we’d be able to switch the data source between live and local.

This sounds easier than it was. Even with cross-platform considerations aside, neither the online state property (window.navigator.onLine), the “online” and “offline” events fired on the <body> element for state changes, nor the navigator.connection object that was supposed to expose the on/offline state plus bandwidth and more, really turned out reliable enough.

    Standardization is still ongoing around all of the above, and some implementations are labeled as experimental for a good reason :)

    We ultimately ended up writing a NetworkStateService class that uses all of the above as hints, but ultimately and very pragmatically convinces itself with regular HEAD requests to a known live URL that no event went missing and the state is correct.
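
The heart of such a class is little more than a periodic HEAD request; roughly like this sketch (the URL, interval and application hook are illustrative):

function checkOnline(callback) {
  var xhr = new XMLHttpRequest();
  // cache-buster, so nothing but the live server can answer
  xhr.open('HEAD', 'http://api.example.com/ping?' + Date.now(), true);
  xhr.timeout = 5000;
  xhr.onload = function () { callback(true); };
  xhr.onerror = xhr.ontimeout = function () { callback(false); };
  xhr.send();
}

setInterval(function () {
  checkOnline(function (online) {
    networkStateService.setState(online); // hypothetical hook
  });
}, 30000);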

    That settled, we still needed to make the app work in offline mode. In terms of storage opportunities, we were looking at:

• App / app cache – i.e. everything listed in the file that the value of appcache_path in the app’s webapp.manifest points to, and which is therefore downloaded onto the device when the app is installed. Capacity: <= 50M; on other platforms (e.g. iOS/Safari), user interaction is required from 10M+, and the recommendation from Mozilla was to stay under 2M. Updates: hard – they require user interaction / consent, and only a wholesale update of the entire app is possible. Access: by (relative) path. Typical use: HTML, JS, CSS, static assets such as UI icons.
• LocalStorage. Capacity: 5M on UTF-8 platforms such as FFOS, 2.5M on UTF-16 platforms such as Chrome. Updates: anytime from the app. Access: by name. Typical use: key-value storage of app status, user input, or the entire data of modest apps.
• Device Storage (often an SD card). Capacity: limited only by hardware. Updates: anytime from the app (unless mounted as a USB drive when connected to a desktop computer). Access: by path, through the Device Storage API. Typical use: big things.
• FileSystem API: a bad idea.
• Database. Capacity: unlimited on FFOS; mileage on other platforms varies. Updates: anytime from the app. Access: quick and by arbitrary properties. Typical use: databases :)

    Some aspects of where to store the data for offline operation were decided upon easily, others not so much:

    • the app, i.e. the HTML, JS, CSS, and UI images would go into the app cache
    • state would be maintained in Local Storage
    • map tiles again in the app cache. Which was a rather dumb decision, as we learned later. Barcelona up to zoom level 16 was 10M, but later cities were different. London was >200M and even reduced to max. zoom 15 still worth 61M. So we moved that to Device Storage and added an actively managed download process for later releases.
• The venue information, i.e. all the names, locations, images, reviews, details, showtimes etc. of the places that Time Out shows in Barcelona. Seeing that we needed lots of space, efficient and arbitrary access plus dynamic updates, this had to go into the Database. But how?

    The state of affairs across the different mobile HTML5 platforms was confusing at best, with Firefox OS already supporting IndexedDB, but Safari and Chrome (considering earlier versions up to Android 2.x) still relying on a swamp of similar but different sqlite / WebSQL variations.

So we cried for help and received it, as always when we had reached out to the Mozilla team – this time in the form of a pointer to PouchDB, a JS-based DB layer that wraps away the different native DB storage engines behind a CouchDB-like interface and adds super easy on-demand synchronization to a remote CouchDB-hosted master DB.

Back then it was still in pre-alpha state but already very usable. There were some drawbacks, such as the need to add a shim for WebSQL-based platforms, which in turn meant we couldn’t rely on storage being 8-bit clean, so we had to base64-encode our binaries, most of all the venue images. Not exactly PouchDB’s fault, but it still blew up the size.
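
In today’s PouchDB API (the pre-alpha we used back then differed in detail; the URL and document id are placeholders), the basics look like this:

var db = new PouchDB('venues');

// Pull the harvested venue data from the remote CouchDB master
db.replicate.from('https://couch.example.com/venues')
  .on('complete', function () {
    console.log('Offline venue data is ready');
  });

// ...and read venues locally later, online or not
db.get('venue-12345').then(function (venue) {
  console.log(venue.name);
});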

    Harvesting

    The DB platform being chosen, we next had to think how we’d harvest all the venue data from Time Out’s API into the DB. There were a couple of endpoints at our disposal. The most promising for this task was proximity search with no category or other restrictions applied, as we thought it would let us harvest a given city square by square.

The trouble with distance metrics, however, is that they produce circles rather than squares. So step 1 of our thinking would have missed venues in the corners of our theoretical grid,

while step 2, extending the radius to (half) the grid’s diagonal, would produce redundant hits and necessitate deduplication.

In the end, we simply searched by proximity to a city center location, paginating through the result indefinitely, so that we could be sure to encounter every venue, and each of them only once:
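
The pagination loop itself is trivial. Sketched here in JavaScript for illustration (the actual harvester was PHP, as described below; the endpoint, parameters and storeVenue helper are hypothetical):

function harvest(page, done) {
  apiClient.call('proximity_search',
    { lat: 41.3851, lon: 2.1734, page: page }, // city centre
    function (result) {
      result.venues.forEach(storeVenue);       // hypothetical DB write
      if (result.venues.length > 0) {
        harvest(page + 1, done);               // next page, until exhausted
      } else {
        done();
      }
    });
}

harvest(1, function () { console.log('Harvest complete'); });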

Technically, we built the harvester in PHP as an extension to the CORS-enabled, result-reducing API proxy for live operation that was already in place. It fed the venue information into the master CouchDB co-hosted there.

With time before MWC 2013 getting tight, we didn’t spend much time on sophisticated data organization and just pushed the venue information into the DB as one table per category, one row per venue, indexed by location.

    This allowed us to support category based and area / proximity based (map and list) browsing. We developed an idea how offline keyword search might be made possible, but it never came to that. So the app simply removes the search icon when it goes offline, and puts it back when it has live connectivity again.

    Overall, the app now

• supported live operation out of the box,
    • checked its synchronization state to the remote master DB on startup,
    • asked, if needed, permission to make the big (initial or update) download,
    • supported all use cases but keyword search when offline.

    The involved components and their interactions are summarized in this diagram:

    Organizing vs. Optimizing the code

    For the development of the app, we maintained the code in a well-structured and extensive source tree, with e.g. each JavaScript class residing in a file of its own. Part of the source tree is shown below:

This was, however, not ideal for deployment, especially as a hosted Firefox OS app or mobile website, where the fewer and smaller the files, the faster the download.

    Here, Require.js came to our rescue.

    It provides a very elegant way of smart and asynchronous requirement handling (AMD), but more importantly for our purpose, comes with an optimizer that minifies and combines the JS and CSS source into one file each:

    To enable asynchronous dependency management, modules and their requirements must be made known to the AMD API through declarations, essentially of a function that returns the constructor for the class you’re defining.

Applied to the search result screen of our application, it looks like this:

define
(
  // new class being defined
  'screensSearchResultScreen',

  // its dependencies
  ['screens/abstractResultScreen', 'app/applicationController'],

  // its anonymous constructor
  function (AbstractResultScreen, ApplicationController)
  {
    var SearchResultScreen = $.extend(true, {}, AbstractResultScreen,
    {
      // properties and methods
      dom:
      {
        resultRowTemplate: $('#searchResultRowTemplate'),
        list: $('#search-result-screen-inner-list')
        // ...
      }
      // ...
    });

    // ...

    return SearchResultScreen;
  }
);

    For executing the optimization step in the build & deployment process, we used Rhino, Mozilla’s Java-based JavaScript engine:

    java -classpath ./lib/js.jar:./lib/compiler.jar
      org.mozilla.javascript.tools.shell.Main ./lib/r.js -o /tmp/timeout-webapp/
      $1_config.js

    CSS bundling and minification is supported, too, and requires just another call with a different config.

    Outcome

    Four weeks had been a very tight timeline to start with, and we had completely underestimated the intricacies of taking HTML5 to a mobile and offline-enabled context, and wrapping up the result as a Marketplace-ready Firefox OS app.

    Debugging capabilities in Firefox OS, especially on the devices themselves, were still at an early stage (compared to clicking about:app-manager today). So the lights in our Cologne office remained lit until pretty late then.

    Having built the app with a clear separation between functionality and presentation also turned out a wise choice when a week before T0 new mock-ups for most of the front end came in :)

But it was great and exciting fun, we learned a lot in the process, and we ended up with some very useful shiny new tools in our box – often based on pointers from the super-helpful team at Mozilla.

    Truth be told, we had started into the project with mixed expectations as to how close to the native app experience we could get. We came back fully convinced and eager for more.

In the end, we made the deadline, and as a fellow hacker you can probably imagine our relief. The app even received its 70 seconds of fame when Jay Sullivan briefly demoed it at Mozilla’s MWC 2013 press conference as a showcase for HTML5’s and Firefox OS’s offline readiness (Time Out piece at 7:50). We were so proud!

If you want to play with it, you can find the app in the marketplace or go ahead and try it online (no offline mode then).

Since then, the Time Out Firefox OS app has continued to evolve, and we as a team have used the chance to continue to play with and build apps for FFOS. To some degree, the reusable part of this has since become a framework, but that’s a story for another day.

    We’d like to thank everyone who helped us along the way, especially Taylor Wescoatt, Sophie Lewis and Dave Cook from Time Out, Desigan Chinniah and Harald Kirschner from Mozilla, who were always there when we needed help, and of course Robert Nyman, who patiently coached us through writing this up.

  2. Ember.JS – What it is and why we need to care about it

    This is a guest post by Sourav Lahoti and his thoughts about Ember.js

Developers increasingly turn to client-side frameworks to simplify development, and there’s a big need for good ones in this area. We see a lot of players in this field, but for all the functionality and moving parts they cover, very few stand out in particular – Ember.js is one of them.

So what is Ember.js? Ember.js is an MVC (Model–View–Controller) JavaScript framework maintained by the Ember Core Team (including Tom Dale, Yehuda Katz, and others). It helps developers create ambitious single-page web applications that don’t sacrifice what makes the web great: URI semantics, RESTful architecture, and the write-once, run-anywhere trio of HTML, CSS, and JavaScript.

    Why do we need to care

Ember.js is tightly coupled with the technologies that make up the web today, and it doesn’t attempt to abstract them away. It brings a clean and consistent application development model, and the framework will keep evolving along with current trends in front-end web technology.

It makes it very easy to create your own “components” and “template views” that are easy to understand, create and update. Coupled with its consistent way of managing bindings and computed properties, Ember.js does indeed take care of much of the boilerplate code that a web framework needs.

    The core concept

There are some terms that you will find very common when you use Ember.js, and they form its basics:

    Routes
A Route object basically represents the state of the application and corresponds to a URL.
    Models
    Every route has an associated Model object, containing the data associated with the current state of the application.
    Controllers
    Controllers are used to decorate models with display logic.

    A controller typically inherits from ObjectController if the template is associated with a single model record, or an ArrayController if the template is associated with a list of records.

    Views
    Views are used to add sophisticated handling of user events to templates or to add reusable behavior to a template.
    Components
    Components are a specialized view for creating custom elements that can be easily reused in templates.
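
In code (Ember 1.x era; the route and property names are illustrative), these concepts fit together roughly like this:

App = Ember.Application.create();

// A route corresponds to a URL and provides the model for that state
App.Router.map(function () {
  this.route('venues');
});

App.VenuesRoute = Ember.Route.extend({
  model: function () {
    return [{ name: 'Bar One' }, { name: 'Bar Two' }];
  }
});

// A controller decorates the model with display logic;
// ArrayController, because the template shows a list of records
App.VenuesController = Ember.ArrayController.extend({
  count: function () {
    return this.get('length');
  }.property('length')
});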

    Hands-on with Ember.js

    Data Binding:

<script type="text/x-handlebars">
  <p>
    <label>Insert your name:</label>
    {{input type="text" value=name}}
  </p>

  <p><strong>Echo: {{name}}</strong></p>
</script>
    App = Ember.Application.create();

    Final result when the user interacts with the web app

Ember.js does support data binding, as we can see in the above example. What we type into the input is bound to name, as is the text after “Echo:”. When you change the text in one place, it automatically updates everywhere.

    But how does this happen? Ember.js uses Handlebars for two-way data binding. Templates written in handlebars get and set data from their controller. Every time we type something in our input, the name property of our controller is updated. Then, automatically, the template is updated because the bound data changed.

    A simple Visiting card demo using Handlebars

    We can create our own elements by using Handlebars.

    HTML

    <script type="text/x-handlebars">
     
      {{v-card myname=name street-address=address locality=city zip=zipCode email=email}}
     
      <h2 class="subheader">Enter Your information:</h2>
     
      <label>Enter Your Name:</label>
      {{input type="text" value=name}}
     
      <label>Enter Your Address:</label>
      {{input type="text" value=address}}
     
      <label>Enter Your City:</label>
      {{input type="text" value=city}}
     
      <label>Enter Your Zip Code:</label>
      {{input type="text" value=zipCode}}
     
      <label>Enter Your Email address:</label>
      {{input type="text" value=email}}
     
    </script>
     
    <script type="text/x-handlebars" data-template-name="components/v-card">
     
      <ul class="vcard">
        <li class="myname">{{myname}}</li>
        <li class="street-address">{{street-address}}</li>
        <li class="locality">{{locality}}</li>
        <li><span class="state">{{usState}}</span>, <span class="zip">{{zip}}</span></li>
        <li class="email">{{email}}</li>
      </ul>
     
    </script>

    CSS

    .vcard {
      border: 1px solid #dcdcdc;
      max-width: 12em;
      padding: 0.5em;
    }
     
    .vcard li {
      list-style: none;
    }
     
.vcard .myname {
      font-weight: bold;
    }
     
    .vcard .email {
      font-family: monospace;
    }
     
    label {
      display: block;
      margin-top: 0.5em;
    }

    JavaScript

    App = Ember.Application.create();
     
    App.ApplicationController = Ember.Controller.extend({
        name: 'Sourav',
        address: '123 M.G Road.',
        city: 'Kolkata',
        zipCode: '712248',
        email: 'me@me.com'
    });

    The component is defined by opening a new <script type="text/x-handlebars">, and setting its template name using the data-template-name attribute to be components/[NAME].

    We should note that the web components specification requires the name to have a dash in it in order to separate it from existing HTML tags.

    There is much more to it, I have just touched the surface. For more information, feel free to check out the Ember.js Guides.

  3. JavaScriptOO.com, to find what meets your JavaScript needs

    The JavaScript Renaissance

    We all know the major players in JavaScript projects. MV* frameworks like AngularJS, Backbone, and Ember.js are inspiring a whole new breed of client applications. Utility libraries like underscore and lodash simplify constructs once reserved for academic exercise. And of course, the monolithic namespace jQuery is everywhere. The large teams and growing communities behind these projects (a little corporate backing never hurts) are moving forward and providing very solid platforms for developers to build upon. However, they are merely a precursor for the renaissance that is happening in the world of JavaScript right now.

Enter the micro-libraries, the drop-in replacements, and the “I-Had-No-Idea-JS-Could-Do-That” projects. Thanks to tooling like Grunt, Bower, and npm, testing suites like Jasmine and QUnit, and of course the social coding site GitHub, dozens of peer-reviewed and test-driven JavaScript libraries are sprouting up every day. Fresh approaches to everything from core JavaScript functionality to abstractions of the ridiculously complex are in abundance and expanding the very foundation of the web.

    VerbalExpression lets you write regular expressions in English; Knwl.js is a natural language processor; 140medley is an entire framework in 821 bytes. Want a DOM selector engine other than sizzle? Try micro-selector, nut, zest, qwery, Sly, or Satisfy. Need a templating engine? Try T-Lite, Grips, gloomy, Transparency, dust, hogan.js, Tempo, Plates, Mold, shorttag, doT.js, t.js, Milk, or at least 10 others. Dates got you down? Check out Date-Utils, moment.js, datejs, an.hour.ago, time.js. Route with Pilot, filter images with CamanJS, write games in Crafty, or make a presentation with RevealJS or impress.js.

    Of course, along with this prolific creativity in the JS universe comes some serious overload. A bit of natural selection will eventually get the best of these projects on your radar, but if you want to see the really exciting bits of evolution occurring you have to watch. Constantly.

    JavaScriptOO.com

    Watching constantly is exactly what I do with JavaScriptOO.com. I watch, I lurk, I read, and eventually I find something that really inspires me.

    The elevator pitch for the site is that it is a directory of JavaScript libraries with examples, CDN links, statistics, and sometimes videos about each library.

    Behind the scenes, after sifting through github, twitter, hacker news, pineapple, and an endless stream of sites and finding something exciting, I begin the slow process of adding a library to the site. Slow is a relative term, but for me, in this context, it means anywhere from 30 minutes to a few days. Adding a library to the site is a purposefully manual process that requires I actually spend some time with the library, writing an example for it, categorizing it as best I can, and sometimes even creating a video about it.

    This slow process is a huge bottleneck for updates on JSOO, and boy, do I hear about it. However, it also keeps the site from becoming just a directory of github links and it keeps the single curator excited about maintaining the site.

    Examples and submitting your library

There are currently 409 examples on the site… almost one for every day it has been online. There are 79 libraries in the “Needed Examples” section, where visitors can submit a gist or fiddle for a library and are encouraged to “include your Twitter handle or any other marketing you may like to, but keep it simple”. Lastly, there is a section for submitting your own library. Not all libraries submitted are added to the site, but they are given immediate priority, and if they are a fit, added to the queue. There is no editorial, no blog, no opinion at all other than hoping every visitor feels like this:

Beyond the very manual process of adding a library, the site is also a chance for me to experiment with all sorts of tech and see in real time how it performs under a moderate load. Originally launched as a .NET application, most of what you see today is running Node.js under iisnode using Express with Jade templates (moving to doT.js as I write), a gulpjs build process, a homegrown CMS using AngularJS and VB.NET (gasp!), and a Lucene.NET search application in C#.

  4. Gap between asm.js and native performance gets even narrower with float32 optimizations

    asm.js is a simple subset of JavaScript that is very easy to optimize, suitable for use as a compiler target from languages like C and C++. Earlier this year Firefox could run asm.js code at about half of native speed – that is, C++ code compiled by emscripten could run at about half the speed that the same C++ code could run when compiled natively – and we thought that through improvements in both emscripten (which generates asm.js code from C++) and JS engines (that run that asm.js code), it would be possible to get much closer to native speed.

    Since then many speedups have arrived, lots of them small and specific, but there were also a few large features as well. For example, Firefox has recently gained the ability to optimize some floating-point operations so that they are performed using 32-bit floats instead of 64-bit doubles, which provides substantial speedups in some cases as shown in that link. That optimization work was generic and applied to any JavaScript code that happens to be optimizable in that way. Following that work and the speedups it achieved, there was no reason not to add float32 to the asm.js type system so that asm.js code can benefit from it specifically.

    The work to implement that in both emscripten and SpiderMonkey has recently completed, and here are the performance numbers:


    Run times are normalized to clang, so lower is better. The red bars (firefox-f32) represent Firefox running on emscripten-generated code using float32. As the graph shows, Firefox with float32 optimizations can run all those benchmarks at around 1.5x slower than native, or better. That’s a big improvement from earlier this year, when as mentioned before things were closer to 2x slower than native. You can also see the specific improvement thanks to float32 optimizations by comparing to the orange bar (firefox) next to it – in floating-point heavy benchmarks like skinning, linpack and box2d, the speedup is very noticeable.

    Another thing to note about those numbers is that not just one native compiler is shown, but two, both clang and gcc. In a few benchmarks, the difference between clang and gcc is significant, showing that while we often talk about “times slower than native speed”, “native speed” is a somewhat loose term, since there are differences between native compilers.

    In fact, on some benchmarks, like box2d, fasta and copy, asm.js is as close or closer to clang than clang is to gcc. There is even one case where asm.js beats clang by a slight amount, on box2d (gcc also beats clang on that benchmark, by a larger amount, so probably clang’s backend codegen just happens to be a little unlucky there).

    Overall, what this shows is that “native speed” is not a single number, but a range. It looks like asm.js on Firefox is very close to that range – that is, while it’s on average slower than clang and gcc, the amount it is slower by is not far off from how much native compilers differ amongst themselves.

Note that float32 code generation is off by default in emscripten. This is intentional: while it can both improve performance and ensure proper C++ float semantics, it also increases code size – due to the added Math.fround calls – which can be detrimental in some cases, especially in JavaScript engines not yet supporting Math.fround.
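
Math.fround rounds its argument to the nearest 32-bit float, which is what licenses the engine to use single-precision arithmetic. A tiny illustration:

// Default JavaScript semantics: 64-bit doubles
var d = 0.1 + 0.2;

// Explicit float32 semantics, as emscripten emits with float32
// optimizations enabled; an engine that recognizes Math.fround
// can execute this with single-precision hardware instructions
var f = Math.fround(Math.fround(0.1) + Math.fround(0.2));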

There are some ways to work around that issue, such as the outlining option, which reduces maximum function size. We have some other ideas on ways to improve code generation in emscripten as well, so we’ll be experimenting with those for a while, as well as following when Math.fround gets supported in browsers (so far Firefox and Safari support it). Hopefully in the not so far future we can enable float32 optimizations by default in emscripten.

    Summary

    In summary, the graph above shows asm.js performance getting yet closer to native speed. While for the reasons just mentioned I don’t recommend that people build with float32 optimizations quite yet – hopefully soon though! – it’s an exciting increase in performance. And even the current performance numbers – 1.5x slower than native, or better – are not the limit of what can be achieved, as there are still big improvements either under way or in planning, both in emscripten and in JavaScript engines.

  5. Ember Inspector on a Firefox near you

    … or Cross-Browser Add-ons for Fun or Profit

Browser add-ons are clearly an important web browser feature, at least on the desktop platform, and for a long time Firefox was the preferred target of browser add-on authors. When Google launched Chrome, this trend in the desktop browser domain was clear enough that their browser provides an add-on API as well.

Most of the web devtools we are used to are now integrated directly into our browsers, but they were add-ons not so long ago, and it’s no surprise that new web developer tools are born as add-ons.

Web devtools (integrated or add-ons) can motivate web developers to change their browser, and web developers can in turn push web users to change theirs. So, long story short, it’s interesting and useful to create cross-browser add-ons, especially web devtools add-ons (e.g. to help preserve the neutrality of the web).

    With this goal in mind, I chose Ember Inspector as the target for my cross-browser devtool add-ons experiment, based on the following reasons:

    • It belongs to an emerging and interesting web devtools family (web framework devtools)
    • It’s a pretty complex / real world Chrome extension
    • It’s mostly written in the same web framework by its own community
    • Even if it is a Chrome extension, it’s a webapp built from the app sources using grunt
    • Its JavaScript code is organized into modules and Chrome-specific code is mostly isolated in just a couple of those
Plan & Run Porting Effort

      Looking into the ember-extension git repository, we see that the add-on is built from its sources using grunt:

      Ember Extension: chrome grunt build process

      The extension communicates between the developer tools panel, the page and the main extension code via message passing:

      Ember Extension: High Level View

      Using this knowledge, planning the port to Firefox was surprisingly easy:

      • Create new Firefox add-on specific code (register a devtool panel, control the inspected tab)
      • Polyfill the communication channel between the ember_debug module (that is injected into the inspected tab) and the devtool ember app (that is running in the devtools panel)
• Polyfill the missing non-standard inspect function, which opens the DOM Inspector on a DOM element selected by a given Ember View id
      • Minor tweaks (isolate remaining Chrome and Firefox specific code, fix CSS -webkit prefixed rules)

      In my opinion this port was particularly pleasant to plan thanks to two main design choices:

• Modular JavaScript sources, which help to keep browser-specific code encapsulated in replaceable modules
• The devtool panel and the code injected into the target tab collaborate by exchanging simple JSON messages, and the protocol (defined by this add-on) is totally browser-agnostic

      Most of the JavaScript modules which compose this extension were already browser independent, so the first step was to bootstrap a simple Firefox Add-on and register a new devtool panel.

Creating a new panel in the DevTools is really simple, and there are some useful docs about the topic on the Tools/DevToolsAPI page (work in progress).

      Register / unregister devtool panel

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/main.js

      Devtool panel definition

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L26

Then, moving to the second step, we adapted the code used to create the message channels between the devtool panel and the injected code running in the target tab, using content scripts and the low-level content worker from the Mozilla Add-on SDK, both of which are well documented in the official guide and API reference:

      EmberInspector - Workers, Content Scripts and Adapters

      DevTool Panel Workers

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js

      Inject ember_debug

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js
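
The underlying Add-on SDK pattern is roughly the following (a generic sketch of the content-script channel, not the actual ember-extension source; panel stands in for the devtool panel side):

var tabs = require('sdk/tabs');
var self = require('sdk/self');

// Inject the ember_debug content script into the inspected tab
var worker = tabs.activeTab.attach({
  contentScriptFile: self.data.url('ember_debug.js')
});

// Messages from the injected code are forwarded to the devtool panel...
worker.port.on('emberDebug', function (message) {
  panel.postMessage(message);
});

// ...and messages from the panel travel the other way
function sendToTab(message) {
  worker.port.emit('devtoolPanel', message);
}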

      Finally hook browser specific code needed to activate the DOM Inspector on a defined DOM Element:

      Inspect DOM element request handler

      From https://github.com/tildeio/ember-extension/blob/master/dist_firefox/lib/devtool-panel.js#L178

      Evaluate its features and dive into the exchanged messages

At this point one could wonder: how useful is a tool like this? Do I really need it? etc.

I must admit that I started and completed this port without being an experienced EmberJS developer. But to check that all the original features worked correctly on Firefox, and to really understand how this browser add-on helps EmberJS developers during the app development and debugging phases (its most important use cases), I started to experiment with EmberJS. I have to say that EmberJS is a very pleasant framework to work with, and Ember Inspector is a really important tool to put into our tool belts.

I’m pretty sure that every medium or large sized JavaScript framework needs this kind of devtool. Clearly it will never be an integrated one, because it’s framework-specific, and we will get used to this new family of devtool add-ons from now on.

      List Ember View, Model Components and Routes

The first use case is being able to immediately visualize the Routes, Views/Components, Models and Controllers our EmberJS app instantiates for us, without too much webconsole acrobatics.

So it’s immediately available (and evident) when we open the panel on an EmberJS app active in the current browser tab:

      Ember Inspector - ViewTree

Using these tables we can then inspect all the properties (even computed ones) defined by us or inherited from the Ember classes in the actual object hierarchy.

Using an approach very similar to the Mozilla Remote Debugging Protocol used by the integrated DevTools infrastructure (even when we use the devtools locally, they exchange JSON messages over a pipe), the ember_debug component injected into the target tab sends the info it needs about the instantiated EmberJS objects to the devtool panel component, each identified by internally generated reference IDs (similar to the grips concept from the Mozilla Remote Debugging Protocol).

      Ember Extension - JSON messages

      Logging the exchanged messages, we can learn more about the protocol.

      Receive updates about EmberJS view tree info (EmberDebug -> DevtoolPanel):

      Request inspect object (DevtoolPanel -> EmberDebug):

Receive updates about the requested Object info (EmberDebug -> DevtoolPanel):

      Reach every EmberJS object in the hierarchy from the webconsole

A less evident but really useful feature is “sendToConsole”: being able to reach, from the webconsole, any object/property that we can inspect in the tables described above.

      When we click the >$E link, which is accessible in the right split panel:

      Ember Inspector - sendToConsole

The ember devtool panel asks ember_debug to put the selected object/property into a variable accessible globally in the target tab, named $E; then we can switch to the webconsole and interact with it freely:

      Ember Inspector - sendToConsole

      Request send object to console (DevtoolPanel -> EmberDebug):

      Much more

These are only some of the features already present in Ember Inspector, and more are coming in upcoming versions (e.g. logging and inspecting Ember Promises).

If you already use EmberJS, or if you are thinking about trying it, I suggest you give Ember Inspector a try (on Firefox or Chrome, whichever you prefer). It will turn inspecting your EmberJS webapp into a fast and easy task.

      Integrate XPI building into the grunt-based build process

The last challenge on the road to a Firefox add-on fully integrated into the ember-extension build workflow was XPI building: integrating the packaging of an add-on based on the Mozilla Add-on SDK into the grunt build process.

Chrome crx extensions are simply ZIP files, as are Firefox XPI add-ons, but Firefox add-ons based on the Mozilla Add-on SDK need to be built using the cfx tool from the Add-on SDK package.

If we want more cross-browser add-ons, we have to help developers build cross-browser extensions using the same approach used by ember-extension: a webapp built using grunt which runs inside a browser add-on (which provides glue code specific to the various supported browsers).

So I decided to move the grunt plugin that I put together to integrate common and custom Add-on SDK tasks (e.g. downloading a defined Add-on SDK release, building an XPI, running cfx with custom parameters) into a separate project (and npm package), because it could help make this task simpler and less annoying.

      Ember Extension: Firefox and Chrome Add-ons grunt build

      Build and run Ember Inspector Firefox Add-on using grunt:

      Following are some interesting fragments from grunt-mozilla-addon-sdk integration into ember-extension (which are briefly documented in the grunt-mozilla-addon-sdk repo README):

      Integrate grunt plugin into npm dependencies: package.json

      Define and use grunt shortcut tasks: Gruntfile.js

      Configure grunt-mozilla-addon-sdk tasks options

      Conclusion

      Especially thanks to the help from the EmberJS/EmberInspector community and its maintainers, Ember Inspector Firefox add-on is officially merged and integrated in the automated build process, so now we can use it on Firefox and Chrome to inspect our EmberJS apps!


      In this article we’ve briefly dissected an interesting pattern to develop cross-browser devtools add-ons, and introduced a grunt plugin that simplifies integration of Add-on SDK tools into projects built using grunt: https://npmjs.org/package/grunt-mozilla-addon-sdk

Thanks to the same web-first approach that Mozilla is pushing in the apps domain, creating cross-browser add-ons is definitely simpler than we thought, and we all win :-)

      Happy Cross-Browser Extending,
      Luca

  6. The Side Projects of Mozillians: JSFiddle and Meatspac.es

    At Mozilla, we are happy to get the chance to work with a lot of talented people. Therefore, as an on-going series, we wanted to take the opportunity to highlight some of the exciting projects Mozillians work on in their spare time.

    JSFiddle

    JSFiddle is a tool to write web examples (in HTML, JavaScript and CSS) called ‘fiddles’. They can be saved and shared with others or embedded in a website which is perfect for blogs, documentation or tutorials. Created by Piotr Zalewa.

    JSFiddle

    Piotr: I wanted a tool that could help me check if my frontend code was working. I was active on the MooTools scene at the time and we needed a tool to support our users who had questions about the framework and specific bugs to solve. The community is the best motivation. There are about 2,000 developers creating and watching fiddles right now! Many big projects are using JSFiddle for docs (MooTools, HighCharts) or bug requests (jQuery).

    I’m always logged in on the #mootools IRC channel and one day we had a small competition to see who could be the first to answer support questions with only one line of JavaScript code. A user asked a non-trivial question which needed to be answered with both HTML and JavaScript. Our usual workflow was to write an HTML file, run it locally in the browser, copy the code to a Pastebin site then share the link. No one knew of a tool that could do this. The next day I had a prototype created in the evening and it was well accepted. The working but ugly version was completed shortly after. Oskar Krawczyk joined as a designer and the project was ready to be shown to the world.

It started as Django and MySQL on the server side with MooTools as a frontend framework. Since then the only major change was adding Memcache. Currently we run JSFiddle on 12 servers sponsored by DigitalOcean: 2 database servers, 3 application servers, 2 Memcache servers, plus static-file and development servers. I would ideally like to have the database structured in a way that would be easier to scale. The database is huge and updating tables takes a lot of time.

    JSFiddle was designed in the time when most of the JavaScript libraries were running under one framework only. We want to allow users to mix frameworks and add more languages. At the moment you can write in HTML, JavaScript, Coffeescript, CSS and SCSS but I would like to support more languages. We’ve got a full hat of ideas to be implemented but I think it’s better to provide improvements than promises.

    Meatspac.es

    Meatspac.es is a single public channel chat app that generates animated GIFs of users from their camera once they submit a new message. Created by Jen Fong with GIF library support added by Sole Penadés.

    Meatspac.es

    Jen: I’ve been working on various quirky chat apps that involved some form of embedded media so this was an idea I had about getting users to interact beyond typing by posing for the camera and doing a little movement. I also really like GIFs and the fact that they work everywhere. I had been playing with WebRTC here and there and Sole was working on her RTCamera app when I thought: “Could we combine the two worlds? Chat and GIFs?”.

For the web server I used Nginx, which proxies to a long-running Node process using Express. The messages and GIFs are stored temporarily in LevelDB with a TTL (time-to-live) that deletes each message, including the GIFs stored as Base64 blobs, after 10 minutes. On the client side, it uses jQuery and some GIF library files, and receives updates via WebSockets with an AJAX fallback.

    The biggest challenge of the project was surprisingly not code related! It was largely keeping up with all the craziness when a flood of people started using the chat, tweeting at me and contacting me. I first mentioned it publicly at ‘RealTimeConf’ in Portland a few weeks prior then started tweeting about it. After that a bunch of people checked it out, and someone posted it on Hacker News where even more people came (around 8,000 people on the heaviest day). It was mentioned on Twitter and various sources for a few days after.

    People can be really creative with their GIFs. It was also interesting to watch people, both women and men, give each other humorous ‘-bro’ nicknames. They would always ask others what their name should be rather than picking one themselves.

    I am now working on a similar app for one-to-many GIF chatting on Firefox OS called chatspaces. Anyone who is interested in contributing can watch the repository and check the README for ways to contribute.

  7. Handling click-to-activate plugins using JavaScript

    From Firefox 26 onwards — and in the case of insecure Flash/Java in older Firefox versions — most plugins will not be automatically activated. We can therefore no longer rely on plugins starting immediately after they have been inserted into the page. This article covers JavaScript techniques we can employ to handle plugins, making it less likely that affected sites will break.

    Using a script to determine if a plugin is installed

    To detect if a plugin is actually installed, we can query navigator.mimeTypes for the plugin MIME type we intend to use, to differentiate between plugins that are not installed and those that are click-to-activate. For example:

    function isJavaAvailable() {
        return 'application/x-java-applet' in navigator.mimeTypes;
    }

    Note: Do not iterate through navigator.mimeTypes or navigator.plugins, as enumeration may well be removed as a privacy measure in a future version of Firefox.
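
    The same check works for any plugin whose MIME type you know; for instance, a Flash test might look like this (using Flash’s registered MIME type):

    function isFlashAvailable() {
        return 'application/x-shockwave-flash' in navigator.mimeTypes;
    }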

    Using a script callback to determine when a plugin is activated

    The next thing to be careful of is scripting plugin instances immediately after they are created on the page; if the plugin hasn’t been activated and loaded yet, those calls will break. Instead, the plugin should make a call into JavaScript after it is created, using NPRuntime scripting:

    function pluginCreated() {
        document.getElementById('myPlugin').callPluginMethod();
    }
    <object type="application/x-my-plugin" data="somedata.mytype" id="myPlugin">
      <param name="callback" value="pluginCreated()">
    </object>

    Note that the “callback” parameter (or something equivalent) must be implemented by your plugin. This can be done in Flash using the flash.external.ExternalInterface API, or in Java using the netscape.javascript package.

    Using properties on the plugin to determine when it activated

    When using a plugin that doesn’t allow us to specify callbacks and we can’t modify it, an alternative technique is to test for properties that the plugin should have, using code constructs like so:

    <p id="myNotification">Waiting for the plugin to activate!</p>
    <object id="myPlugin" type="application/x-my-plugin"></object>
    function checkPlugin() {
        if (document.getElementById('myPlugin').myProperty !== undefined) {
            document.getElementById('myNotification').style.display = 'none';
            document.getElementById('myPlugin').callPluginMethod();
        } else {
            console.log("Plugin not activated yet.");
            setTimeout(checkPlugin, 500); // Poll again in half a second
        }
    }
    window.onload = checkPlugin;

    Making plugins visible on the page

    When a site wants the user to enable a plugin, the primary indicator is that the plugin is visible on the page, for example:

    Screenshot of the Silverlight plugin activation prompt on the Netflix website.

    If a page creates a plugin that is very small or completely hidden, the only visual indication to the user is the small icon in the Firefox location bar. Even if the plugin element will eventually be hidden, pages should initially create it visible on the page, and then resize or hide it only after the user has activated the plugin. This can be done in a similar fashion to the callback technique we showed above:

    function pluginCreated() {
      // We don't need to see the plugin, so hide it by resizing
      var plugin = document.getElementById('myPlugin');
      plugin.height = 0;
      plugin.width = 0;
      plugin.callPluginMethod();
    }
    <!-- Give the plugin an initial size so it is visible -->
    <object type="application/x-my-plugin" data="somedata.mytype" id="myPlugin" width="300" height="300">
      <param name="callback" value="pluginCreated()">
    </object>

    Note: For more basic information on how plugins operate in Firefox, read Why do I have to click to activate plugins? on support.mozilla.org.

  8. Using JSFiddle to Prototype Firefox OS Apps

    Dancing to the Tune of the Fiddle

    JSFiddle is a fantastic prototyping and code review tool. It’s great for getting out a quick test case or code concept without having to spool up your full tool chain and editor. Further, it’s a great place to paste ill-behaved code so that others can review it and ideally help you get at the root of your problem.

    Now you’re able to not only prototype snippets of code, but Firefox OS apps as well. We’re very excited about this because for a while now we’ve been trying to make sure developers understand that creating a Firefox OS app is just like creating a web app. By tinkering with JSFiddle live in your browser, we think you’ll see just how easy it is and the parallels will be more evident.

    Fiddling a Firefox OS App: The Summary

    Here are the steps that you need to go through to tinker with Firefox OS apps using JSFiddle:

    1. Write your code as you might normally when making a JSFiddle
    2. Append /manifest.webapp to your Fiddle URL, then paste this link into the Firefox OS simulator to install the app
    3. Alternatively, append /fxos.html to your Fiddle URL to get an install page like a typical Firefox OS hosted application

    I’ve created a demo JSFiddle here that we will go over in detail in the next section.

    Fiddling a Firefox OS App: In Detail

    Write Some Code

    Let’s start with a basic “Hello World!”, a familiar minimal implementation. Implement the following code in your Fiddle:

    HTML:

    <h1>Hello world!</h1>

    CSS:

    h1 {
        color: #f00;
    }

    JavaScript:

    alert(document.getElementsByTagName('h1')[0].innerHTML);

    Your Fiddle should resemble the following:

    Hello world Firefox OS JSFiddle

    Then, append /manifest.webapp to the end of your Fiddle URL. Using my demo Fiddle as an example, we end up with http://jsfiddle.net/afabbro/vrVAP/manifest.webapp

    Copy this URL to your clipboard. Depending on your browser’s behavior, it may or may not copy with ‘http://’ intact. Please note that the simulator will not accept any URL where the protocol is not specified explicitly, so if it’s not there, add it. The simulator will highlight the input box with a red border when the URL is invalid.

    If you try and access your manifest.webapp from your browser navigation bar, you should end up downloading a copy of the auto-generated manifest that you can peruse. For example, here is the manifest for my test app:

    {
      "version": "0",
      "name": "Hello World Example",
      "description": "jsFiddle example",
      "launch_path": "/afabbro/vrVAP/app.html",
      "icons": {
        "16": "/favicon.png",
        "128": "/img/jsf-circle.png"
      },
      "developer": {
        "name": "afabbro"
      },
      "installs_allowed_from": ["*"],
      "appcache_path": "http://fiddle.jshell.net/afabbro/vrVAP/cache.manifest",
      "default_locale": "en"
    }

    If you haven’t written a manifest for a Firefox OS app before, viewing this auto-generated one will give you an idea of what bits of information you need to provide for your app when you create your own from scratch later.

    Install the App in the Simulator

    Paste the URL that you copied into the field as shown below. As mentioned previously, the field will highlight red if there are any problems with your URL.

    How your URL should look

    After you add it, the simulator should boot your app immediately.

    Alert with confirmation button

    You can see that after we dismiss the alert(), we land on a view (a basic HTML page in this case) with a single red h1 tag, as we would expect.

    Our Hello World Page in the Simulator

    Install the App From a Firefox OS Device

    In the browser on your Firefox OS device or in the browser provided in the simulator, visit the URL of your Fiddle and append /fxos.html. Using the demo URL as an example again, we obtain: http://jsfiddle.net/afabbro/vrVAP/fxos.html

    Click install, and you should find the app on your home screen.

    Caveats

    This is still very much a new use of the JSFiddle tool, and as such there are still bugs to iron out and features we hope to add over the long term. For instance, at the time of writing, the following caveats apply:

    1. You can only have one JSFiddle’d app installed in the simulator at a time
    2. There is no offline support

    Thanks

    This JSFiddle hack comes to us courtesy of Piotr Zalewa, who also happens to be working on making PhoneGap build for Firefox OS. Let us know what you think in the comments, and post a link to your Fiddle’s manifest if you make something interesting that you want to show off.

  9. So You Wanna Build a Crowdfunding Site?

    The tools to get funded by the crowd should belong to the crowd.

    That's why I want to show you how to roll your own crowdfunding site, in less than 300 lines of code. Everything in this tutorial is open source, and we'll only use other open-source technologies, such as Node.js, MongoDB, and Balanced Payments.

    Here's the Live Demo.
    All source code and tutorial text is Unlicensed.

    0. Quick Start

    If you just want the final crowdfunding site, clone the crowdfunding-tuts repository and go to the /demo folder.

    All you need to do is set your configuration variables, and you’re ready to go! For everyone who wants the nitty-gritty details, carry on.

    1. Setting up a basic Node.js app with Express

    If you haven’t already done so, you’ll need to install Node.js. (duh)

    Create a new folder for your app. We’ll be using the Express.js framework to make things a lot more pleasant. To install the Express node module, run this on the command line inside your app’s folder:

    npm install express

    Next, create a file called app.js, which will be your main server logic. The following code will initialize a simple Express app,
    which just serves a basic homepage and funding page for your crowdfunding site.

    // Configuration
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars
     
    // Initialize an Express app
    var express = require('express');
    var app = express();
    app.use("/static", express.static(__dirname + '/static')); // Serve static files
    app.use(express.bodyParser()); // Can parse POST requests
    app.listen(1337); // The best port
    console.log("App running on http://localhost:1337");
     
    // Serve homepage
    app.get("/",function(request,response){
     
        // TODO: Actually get fundraising total
        response.send(
            "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
            "<h1>Your Crowdfunding Campaign</h1>"+
            "<h2>raised ??? out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
            "<a href='/fund'>Fund This</a>"
        );
     
    });
     
    // Serve funding page
    app.get("/fund",function(request,response){
        response.sendfile("fund.html");
    });

    Create another file named fund.html. This will be your funding page.

    <link rel='stylesheet' type='text/css' href='/static/fancy.css'>
    <h1>Donation Page:</h1>

    Optionally, you may also include a stylesheet at /static/fancy.css,
    so that your site doesn’t look Hella Nasty for the rest of this tutorial.

    @import url(https://fonts.googleapis.com/css?family=Raleway:200);
    body {
        margin: 100px;
        font-family: Raleway; /* Sexy font */
        font-weight: 200;
    }

    Finally, run node app on the command line to start your server!

    Check out your crowdfunding site so far at http://localhost:1337.

    Crowdfunding Homepage 1

    The homepage will display the Campaign Goal you set in the Configuration section of app.js. The donations page isn’t functional yet, so in the following chapters, I’ll show you how to accept and aggregate credit card payments from your wonderful backers.

    2. Getting started with Balanced Payments

    Balanced Payments isn’t just another payments processor. They’ve open sourced their whole site, their chat logs are publicly available, and they even discuss their roadmap in the open. These people get openness.

    Best of all, you don’t even need to sign up to get started with Balanced!

    Just go to this link, and they’ll generate a brand-new Test Marketplace for you, which you can claim with an account afterwards. Remember to keep this tab open, or save the URL, so you can come back to your Test Marketplace later.

    Balanced Test Marketplace

    Click the Settings tab in the sidebar, and note your Marketplace URI and API Key Secret.

    Balanced Settings

    Copy these variables to the Configuration section of app.js like this:

    // Configuration
    var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
    var BALANCED_API_KEY = "YourAPIKey";
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars

    Now, let’s switch back to fund.html to create our actual payment page.

    First, we’ll include and initialize Balanced.js. This JavaScript library will securely tokenize the user’s credit card info, so your server never has to handle that info directly. This keeps your server out of the scope of PCI regulations. Append the following code to fund.html, replacing BALANCED_MARKETPLACE_URI with your actual Marketplace URI:

    <!-- Remember to replace BALANCED_MARKETPLACE_URI with your actual Marketplace URI! -->
    <script src="https://js.balancedpayments.com/v1/balanced.js"></script>
    <script>
        var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
        balanced.init(BALANCED_MARKETPLACE_URI);
    </script>

    Next, create the form itself, asking for the user’s Name, the Amount they want to donate, and other credit card info. We will also add a hidden input, for the credit card token that Balanced.js will give us. The form below comes with default values for a test Visa credit card. Append this to fund.html:

    <form id="payment_form" action="/pay/balanced" method="POST">
     
        Name: <input name="name" value="Pinkie Pie"/> <br />
        Amount: <input name="amount" value="12.34"/> <br />
        Card Number: <input name="card_number" value="4111 1111 1111 1111"/> <br />
        Expiration Month: <input name="expiration_month" value="4"/> <br />
        Expiration Year: <input name="expiration_year" value="2050"/> <br />
        Security Code: <input name="security_code" value="123"/> <br />
     
        <!-- Hidden inputs -->
        <input type="hidden" name="card_uri"/>
     
    </form>
    <button onclick="charge();">
        Pay with Credit Card
    </button>

    Notice the Pay button does not submit the form directly, but calls a charge() function instead, which we are going to implement next. The charge() function will get the credit card token from Balanced.js,
    add it as a hidden input, and submit the form. Append this to fund.html:

    <script>
     
    // Get card data from form.
    function getCardData(){
        // Actual form data
        var form = document.getElementById("payment_form");
        return {
            "name": form.name.value,
            "card_number": form.card_number.value,
            "expiration_month": form.expiration_month.value,
            "expiration_year": form.expiration_year.value,
            "security_code": form.security_code.value
        };
    }
     
    // Charge credit card
    function charge(){
     
        // Securely tokenize card data using Balanced
        var cardData = getCardData();
        balanced.card.create(cardData, function(response) {
     
            // Handle Errors (Anything that's not Success Code 201)
            if(response.status!=201){
                alert(response.error.description);
                return;
            }
     
            // Submit form with Card URI
            var form = document.getElementById("payment_form");
            form.card_uri.value = response.data.uri;
            form.submit();
     
        });
     
    };
     
    </script>

    This form will send a POST request to /pay/balanced, which we will handle in app.js. For now, we just want to display the card token URI. Paste the following code at the end of app.js:

    // Pay via Balanced
    app.post("/pay/balanced",function(request,response){
     
        // Payment Data
        var card_uri = request.body.card_uri;
        var amount = request.body.amount;
        var name = request.body.name;
     
        // Placeholder
        response.send("Your card URI is: "+request.body.card_uri);
     
    });

    Restart your app, (Ctrl-C to exit, then node app to start again) and go back to http://localhost:1337.

    Your payment form should now look like this:

    Funding Form 1

    The default values for the form will already work, so just go ahead and click Pay With Credit Card. (Make sure you’ve replaced BALANCED_MARKETPLACE_URI in fund.html with your actual Test Marketplace’s URI!) Your server will happily respond with the generated Card URI Token.

    Funding Form 2

    Next up, we will use this token to actually charge the given credit card!

    3. Charging cards through Balanced Payments

    Before we charge right into this, (haha) let’s install two more Node.js modules for convenience.

    Run the following in the command line:

    # A library for simplified HTTP requests.
    npm install request
    # A Promises library, to pleasantly handle asynchronous calls and avoid Callback Hell.
    npm install q

    Because we’ll be making multiple calls to Balanced, let’s also create a helper method. The following function returns a Promise that the Balanced API has responded to whatever HTTP Request we just sent it. Append this code to app.js:

    // Calling the Balanced REST API
    var Q = require('q');
    var httpRequest = require('request');
    function _callBalanced(url,params){
     
        // Promise an HTTP POST Request
        var deferred = Q.defer();
        httpRequest.post({
     
            url: "https://api.balancedpayments.com"+BALANCED_MARKETPLACE_URI+url,
            auth: {
                user: BALANCED_API_KEY,
                pass: "",
                sendImmediately: true
            },
            json: params
     
        }, function(error,response,body){
     
            // Handle all Bad Requests (Error 4XX) or Internal Server Errors (Error 5XX)
            if(body.status_code>=400){
                deferred.reject(body.description);
                return;
            }
     
            // Successful Requests
            deferred.resolve(body);
     
        });
        return deferred.promise;
     
    }

    Now, instead of just showing us the Card Token URI when we submit the donation form, we want to:

    1. Create an account with the Card URI
    2. Charge said account for the given amount (note: you’ll have to convert to cents for the Balanced API)
    3. Record the transaction in the database (note: we’re skipping this for now, and covering it in the next chapter)
    4. Render a personalized message from the transaction

    Replace the app.post("/pay/balanced", ... ); callback from the previous chapter with this:

    // Pay via Balanced
    app.post("/pay/balanced",function(request,response){
     
        // Payment Data
        var card_uri = request.body.card_uri;
        var amount = request.body.amount;
        var name = request.body.name;
     
        // TODO: Charge card using Balanced API
        /*response.send("Your card URI is: "+request.body.card_uri);*/
     
        Q.fcall(function(){
     
            // Create an account with the Card URI
            return _callBalanced("/accounts",{
                card_uri: card_uri
            });
     
        }).then(function(account){
     
            // Charge said account for the given amount
            return _callBalanced("/debits",{
                account_uri: account.uri,
                amount: Math.round(amount*100) // Convert from dollars to cents, as integer
            });
     
        }).then(function(transaction){
     
            // Donation data
            var donation = {
                name: name,
                amount: transaction.amount/100, // Convert back from cents to dollars.
                transaction: transaction
            };
     
            // TODO: Actually record the transaction in the database
            return Q.fcall(function(){
                return donation;
            });
     
        }).then(function(donation){
     
            // Personalized Thank You Page
            response.send(
                "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
                "<h1>Thank you, "+donation.name+"!</h1> <br />"+
                "<h2>You donated $"+donation.amount.toFixed(2)+".</h2> <br />"+
                "<a href='/'>Return to Campaign Page</a> <br />"+
                "<br />"+
                "Here's your full Donation Info: <br />"+
                "<pre>"+JSON.stringify(donation,null,4)+"</pre>"
            );
     
        },function(err){
            response.send("Error: "+err);
        });
     
    });

    Now restart your app, and pay through the Donation Page once again. (Note: To cover processing fees, you have to pay more than $0.50 USD) This time, you’ll get a full Payment Complete page, with personalized information!

    Transaction 1

    Furthermore, if you check the transactions tab in your Test Marketplace dashboard, you should find that money has now been added to your balance.

    Transaction 2

    We’re getting close! Next, let’s record donations in a MongoDB database.

    4. Recording donations with MongoDB

    MongoDB is a popular open-source NoSQL database. NoSQL is especially handy for rapid prototyping, because of its dynamic schemas. In other words, you can just make stuff up on the fly.

    This will be useful if, in the future, you want to record extra details about each donation, such as the donator’s email address, reward levels, favorite color, etc.
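
    As a quick illustration (the extra fields below are hypothetical and not part of this tutorial’s code), a later version of the app could simply insert richer donation documents, assuming an open db connection from the native driver:

    // Hypothetical: future donations can carry extra fields with no
    // schema migration; MongoDB stores whatever shape you insert.
    db.collection('donations').insert({
        name: "Pinkie Pie",
        amount: 12.34,
        email: "pinkie@example.com", // new field, made up on the fly
        rewardLevel: "gold"          // so is this one
    }, function(err) {
        if (err) { console.log(err); }
    });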

    Start up a MongoDB database, and get its URI. You can use a remote database with a service such as MongoHQ, but for this tutorial, let’s run MongoDB locally (instructions for installing and running MongoDB on your computer).

    Once you’ve done that, add the MongoDB URI to your Configuration section at the top of app.js.

    // Configuration
    var MONGO_URI = "mongodb://localhost:27017/test";
    var BALANCED_MARKETPLACE_URI = "/v1/marketplaces/TEST-YourMarketplaceURI";
    var BALANCED_API_KEY = "YourAPIKey";
    var CAMPAIGN_GOAL = 1000; // Your fundraising goal, in dollars

    Now, let’s install the native MongoDB driver for Node.js:

    npm install mongodb

    Add the following code to the end of app.js. This will return a Promise that we’ve recorded a donation in MongoDB.

    // Recording a Donation
    var mongo = require('mongodb').MongoClient;
    function _recordDonation(donation){
     
        // Promise saving to database
        var deferred = Q.defer();
        mongo.connect(MONGO_URI,function(err,db){
            if(err){ return deferred.reject(err); }
     
            // Insert donation
            db.collection('donations').insert(donation,function(err){
                if(err){ return deferred.reject(err); }
     
                // Promise the donation you just saved
                deferred.resolve(donation);
     
                // Close database
                db.close();
     
            });
        });
        return deferred.promise;
     
    }

    Previously, we skipped over actually recording a donation to a database.
    Go back, and replace that section of code with this:

    // TODO: Actually record the transaction in the database
    /*return Q.fcall(function(){
        return donation;
    });*/
     
    // Record donation to database
    return _recordDonation(donation);

    Restart your app, and make another donation. If you run db.donations.find() on your MongoDB instance, you’ll find the donation you just logged!

    Transaction 3

    Just one step left…

    Finally, we will use these recorded donations to calculate how much money we’ve raised.

    5. Completing the Donation

    Whether it’s showing progress or showing off, you’ll want to tell potential backers how much your campaign’s already raised.

    To get the total amount donated, simply query for all donation amounts from MongoDB and add them up. Here’s how to do that with MongoDB, wrapped in an asynchronous Promise. Append this code to app.js:

    // Get total donation funds
    function _getTotalFunds(){
     
        // Promise the result from database
        var deferred = Q.defer();
        mongo.connect(MONGO_URI,function(err,db){
            if(err){ return deferred.reject(err); }
     
            // Get amounts of all donations
            db.collection('donations')
            .find( {}, {amount:1} ) // Select all, only return "amount" field
            .toArray(function(err,donations){
                if(err){ return deferred.reject(err); }
     
                // Sum up total amount, and resolve promise.
                var total = donations.reduce(function(previousValue,currentValue){
                    return previousValue + currentValue.amount;
                },0);
                deferred.resolve(total);
     
                // Close database
                db.close();
     
            });
        });
        return deferred.promise;
     
    }

    Now, let’s go back to where we were serving a basic homepage, and change it to actually calculate your total funds and show the world how far along your campaign has gotten.

    // Serve homepage
    app.get("/",function(request,response){
     
        // TODO: Actually get fundraising total
        /*response.send(
            "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
            "<h1>Your Crowdfunding Campaign</h1>"+
            "<h2>raised ??? out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
            "<a href='/fund'>Fund This</a>"
        );*/
     
        Q.fcall(_getTotalFunds).then(function(total){
            response.send(
                "<link rel='stylesheet' type='text/css' href='/static/fancy.css'>"+
                "<h1>Your Crowdfunding Campaign</h1>"+
                "<h2>raised $"+total.toFixed(2)+" out of $"+CAMPAIGN_GOAL.toFixed(2)+"</h2>"+
                "<a href='/fund'>Fund This</a>"
            );
        });
     
    });

    Restart the app, and look at your final homepage.

    Crowdfunding Homepage 2

    It’s… beautiful.

    You’ll see that your total already includes the donations recorded from the previous chapter. Make another payment through the Donations Page, and watch your funding total go up.

    Congratulations, you just made your very own crowdfunding site!

    – – –

    Discuss this on Hacker News

  10. Content Security Policy 1.0 lands in Firefox Aurora

    The information in this article is based on work together with Ian Melven, Kailas Patil and Tanvi Vyas.

    We have just landed support for the Content Security Policy (CSP) 1.0 specification in Firefox Aurora (Firefox 23), available as of tomorrow (May 30th). CSP is a security mechanism that aims to protect a website against content injection attacks by providing a whitelist of known-good domain names to accept JavaScript (and other content) from. CSP does this by sending a Content-Security-Policy header with the document it protects (yes, we lost the X prefix with the 1.0 version of the spec).

    To effectively protect against XSS, a few JavaScript features have to be disabled:

    • All inline JavaScript is disallowed. This means that all JavaScript code must be placed in separate files that are linked via <script src=... >
    • All calls to functions that execute JavaScript code from strings (e.g., eval) are disabled

    CSP now more intuitive and consistent

    While Firefox has had support for CSP since its invention here at Mozilla, things have been changing a lot. The streamlined development of a specification within the W3C has made the concept more intuitive and consistent. Most directives in a CSP header are now of a unified form which explicitly specifies the type of content you want to restrict:

    • img-src
    • object-src
    • script-src
    • style-src and so on.

    Oh, and if you feel like you must allow less secure JavaScript coding styles, you can add the values unsafe-inline or unsafe-eval to your list of script sources. (These used to be called inline-script and eval-script.)
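
    As a concrete illustration (the CDN host is made up for the example), a policy that allows scripts from the page’s own origin plus one trusted CDN, with everything else restricted to the origin itself, would be sent as:

    Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com

    With a header like this in place, inline scripts and eval() are refused, external scripts load only from the two whitelisted sources, and images, styles and other content fall back to the default-src rule.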

    Start protecting your website by implementing CSP now!

    But wait – isn’t that a bit tedious? Writing a complex policy and making sure that you’ve remembered all the resources your website requires? Don’t fret! Here comes UserCSP again!

    Generate your Content Security Policies with UserCSP!

    During the last few months, Kailas Patil, a student in our Security Mentorship Program has continued his GSoC work from last year to update UserCSP.

    UserCSP is a Firefox add-on that helps web developers and security-minded users use CSP. Web developers can create a Content Security Policy (CSP) for their site by using UserCSP’s infer CSP feature. This feature can list required resource URLs and turn them into a policy ready to plug into a CSP header.

    In addition, UserCSP is a first step toward exposing a policy enforcement mechanism directly to web users: they can enforce a stricter policy than a page supplies, or apply a policy to certain websites that don’t currently support CSP.

    While earlier versions of UserCSP were more aligned to content security policies as originally invented at Mozilla, this version is updated to be in compliance with the CSP 1.0 specification. This means that policies derived with this add-on may work in all browsers as soon as they support the specification. Hooray!

    As this evolves and ships, our MDN documentation on Content Security Policy (CSP) will keep on evolving, and we also plan to write more about this in the Mozilla Security Blog in the next few weeks, so stay tuned!