
Articles by Robert Nyman [Editor]

  1. The Making of the Time Out Firefox OS app

    A rash start into adventure

    So we told our client that yes, of course, we would do their Firefox OS app. We didn’t know much about FFOS at the time. But, hey, we had just completed refactoring their native iOS and Android apps. Web applications were our core business all along. So what was to be feared?

    More than we thought, it turned out. Some of the dragons along the way we fought and defeated ourselves. At times we feared that we wouldn’t be able to rescue the princess in time (i.e. before MWC 2013). But whenever we got really lost in detail forest, the brave knights from Mozilla came to our rescue. In the end, it all turned out well and the team lived happily ever after.

    But here’s the full story:

    Mission & challenge

    Just like their iOS and Android apps, Time Out‘s new Firefox OS app was supposed to allow browsing their rich content on bars, restaurants, things to do and more by category, area, proximity or keyword search, patient zero being Barcelona. We would need to show results as illustrated lists as well as visually on a map and have a decent detail view, complete with ratings, access details, phone button and social tools.

    But most importantly, and in addition to what the native apps did, this app was supposed to do all of that even when offline.

Oh, and there needed to be a presentable, working prototype in four weeks' time.

Cross-platform reusability of the code, as a mobile website or as the base of HTML5 apps on other mobile platforms, was clearly priority 2, but still to be kept in mind.

    The princess was clearly in danger. So we arrested everyone on the floor that could possibly be of help and locked them into a room to get the basics sorted out. It quickly emerged that the main architectural challenges were that

    • we had a lot of things to store on the phone, including the app itself, a full street-level map of Barcelona, and Time Out’s information on every venue in town (text, images, position & meta info),
    • at least some of this would need to be loaded from within the app; once initially and synchronizable later,
    • the app would need to remain interactively usable during these potentially lengthy downloads, so they’d need to be asynchronous,
• these downloads would be interrupted whenever the browser location changed.

    In effect, all the different functionalities would have to live within one single HTML document.

    One document plus hash tags

    For dynamically rendering, changing and moving content around as required in a one-page-does-all scenario, JavaScript alone didn’t seem like a wise choice. We’d been warned that Firefox OS was going to roll out on a mix of devices including the very low cost class, so it was clear that fancy transitions of entire full-screen contents couldn’t be orchestrated through JS loops if they were to happen smoothly.

    On the plus side, there was no need for JS-based presentation mechanics. With Firefox OS not bringing any graveyard of half-dead legacy versions to cater to, we could (finally!) rely on HTML5 and CSS3 alone and without fallbacks. Even beyond FFOS, the quick update cycles in the mobile environment didn’t seem to block the path for taking a pure CSS3 approach further to more platforms later.

That much being clear, what better place to look for best-practice examples than Mozilla Hacks? After some digging, Thomas found Hacking Firefox OS, in which Luca Greco describes the use of fragment identifiers (aka hashtags) appended to the URL to switch and transition content via CSS alone, which we happily adopted.

    Another valuable source of ideas was a list of GAIA building blocks on Mozilla’s website, which has since been replaced by the even more useful Building Firefox OS site.

In effect, we ended up thinking in terms of screens. Each is physically a <div> whose visibility and transitions are governed by :target CSS selectors that draw on the browser location's hashtag. Luckily, there's also the hashchange event that we could additionally listen to in order to handle the app-level aspects of such screen changes in JavaScript.

    Our main HTML and CSS structure hence looked like this:
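
    A minimal sketch of the pattern (element IDs and class names here are illustrative, not the app's actual ones):

    <section id="screenContainer">
      <div id="home" class="screen"> ... </div>
      <div id="list" class="screen"> ... </div>
      <div id="detailView" class="screen"> ... </div>
    </section>

    .screen {
      position: absolute;
      left: 100%;                    /* parked off-screen by default */
      transition: left 0.3s ease;
    }

    .screen:target {
      left: 0;                       /* navigating to #home, #list etc. slides the screen in */
    }

    window.addEventListener('hashchange', function () {
      // app-level handling of the screen change (init, cleanup, ...) goes here
    });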

    And a menu

We modeled the drawer menu very similarly, except that it sits in a <nav> element on the same level as the <section> container holding all the screens. Its activation and deactivation works by catching the menu icon clicks, then actively changing the screen container's data-state attribute from JS, which triggers the corresponding CSS3 slide-in / slide-out transition (of the screen container, revealing the menu beneath).
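
    A hedged sketch of that wiring (the IDs and attribute values are ours for illustration):

    document.getElementById('menuIcon').addEventListener('click', function () {
      var container = document.getElementById('screenContainer');
      // flip the state; the CSS transition does the actual sliding
      container.dataset.state =
        container.dataset.state === 'menu-open' ? 'closed' : 'menu-open';
    });

    #screenContainer {
      transition: transform 0.3s ease;
    }

    #screenContainer[data-state="menu-open"] {
      transform: translateX(80%);   /* slide right, revealing the <nav> beneath */
    }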

    This served as our “Hello, World!” test for CSS3-based UI performance on low-end devices, plus as a test case for combining presentation-level CSS3 automation with app-level explicit status handling. We took down a “yes” for both.

    UI

    By the time we had put together a dummy around these concepts, the first design mockups from Time Out came in so that we could start to implement the front end and think about connecting it to the data sources.

For presentation, we tried hard to keep the HTML and CSS to the absolute minimum, with Mozilla's GAIA examples once more being a very valuable source of ideas.

Again, targeting Firefox OS alone allowed us to break free of the backwards compatibility hell that we were still living in, desktop-wise. No one would ask us “Will it display well in IE8?” or worse things. We could finally use real <section>, <nav>, <header>, and <menu> tags instead of an army of different classes of <div>. What a relief!

    The clear, rectangular, flat and minimalistic design we got from Time Out also did its part to keep the UI HTML simple and clean. After we were done with creating and styling the UI for 15 screens, our HTML had only ~250 lines. We later improved that to 150 while extending the functionality, but that’s a different story.

    Speaking of styling, not everything that had looked good on desktop Firefox even in its responsive design view displayed equally well on actual mobile devices. Some things that we fought with and won:

    Scale: The app looked quite different when viewed on the reference device (a TurkCell branded ZTE device that Mozilla had sent us for testing) and on our brand new Nexus 4s:

    After a lot of experimenting, tearing some hair and looking around how others had addressed graceful, proportional scaling for a consistent look & feel across resolutions, we stumbled upon this magic incantation:

    <meta name="viewport" content="user-scalable=no, initial-scale=1,
    maximum-scale=1, width=device-width" />

    What it does, to quote an article at Opera, is to tell the browser that there is “No scaling needed, thank you very much. Just make the viewport as many pixels wide as the device screen width”. It also prevents accidental scaling while the map is zoomed. There is more information on the topic at MDN.

    Then there are things that necessarily get pixelated when scaled up to high resolutions, such as the API based venue images. Not a lot we could do about that. But we could at least make the icons and logo in the app’s chrome look nice in any resolution by transforming them to SVG.

    Another issue on mobile devices was that users have to touch the content in order to scroll it, so we wanted to prevent the automatic highlighting that comes with that:

li, a, span, button, div
{
    outline: none;
    -moz-tap-highlight-color: transparent;
    -moz-user-select: none;
    -moz-user-focus: ignore;
}

We’ve since been warned that suppressing the default highlighting can be an issue in terms of accessibility, so you might want to consider this carefully.

    Connecting to the live data sources

    So now we had the app’s presentational base structure and the UI HTML / CSS in place. It all looked nice with dummy data, but it was still dead.

The trouble with bringing it to life was that Time Out was in the middle of a big project to replace its legacy API with a modern Graffiti-based service and thus had little bandwidth to cater to our project's specific needs. The new scheme was still prototypical and quickly evolving, so we couldn't build against it.

The legacy construct already comprised a proxy that wrapped the raw API into something more suitable for consumption by their iOS and Android apps, but after close examination we found that we'd better re-re-wrap that on the fly in PHP, for a couple of purposes:

• Adding CORS support to get around cross-origin restrictions, with the API and the app living in different subdomains of timeout.com,
• stripping the API output down to what the FFOS app really needed, which we could see would reduce bandwidth and increase speed by an order of magnitude,
• laying the foundation for harvesting API-based data for offline use, which we already knew we'd need to do later

As an alternative to server-side CORS support, one could also think of using the SystemXHR API. It is a mighty and potentially dangerous tool, however, and we wanted to avoid any needless dependency on FFOS-only APIs.

    So while the approach wasn’t exactly future proof, it helped us a lot to get to results quickly, because the endpoints that the app was calling were entirely of our own choice and making, so that we could adapt them as needed without time loss in communication.

    Populating content elements

For all things dynamic and API-driven, we used the same approach to making it visible in the app:

    • Have a simple, minimalistic, empty, hidden, singleton HTML template,
    • clone that template (N-fold for repeated elements),
    • ID and fill the clone(s) with API based content.
• For super simple elements, such as <li>s, we'd skip the cloning and whip up the HTML on the fly while filling.

    As an example, let’s consider the filters for finding venues. Cuisine is a suitable filter for restaurants, but certainly not for museums. Same is true for filter values. There are vegetarian restaurants in Barcelona, but certainly no vegetarian bars. So the filter names and lists of possible values need to be asked of the API after the venue type is selected.

    In the UI, the collapsible category filter for bars & pubs looks like this:

    The template for one filter is a direct child of the one and only

    <div id="templateContainer">

    which serves as our central template repository for everything cloned and filled at runtime and whose only interesting property is being invisible. Inside it, the template for search filters is:

    <div id="filterBoxTemplate">
      <span></span>
      <ul></ul>
    </div>

    So for each filter that we get for any given category, all we had to do was to clone, label, and then fill this template:

$('#filterBoxTemplate').clone().attr('id', filterItem.id)
  .appendTo('#categoryResultScreen .filter-container');
...
$('#' + filterItem.id).children('.filter-button').html(filterItem.name);

As you certainly guessed, we then had to call the API once again for each filter in order to learn about its possible values, which were then rendered into <li> elements within the filter's <ul> on the fly:

    $("#" + filterId).children('.filter_options').html(
    '<li><span>Loading ...</span></li>');
    
    apiClient.call(filterItem.api_method, function (filterOptions)
    {
      ...
      $.each(filterOptions, function(key, option)
      {
        var entry = $('<li filterId="' + option.id + '"><span>'
          + option.name + '</span></li>');
    
if (selectedOptionId && selectedOptionId == option.id)
        {
          entry.addClass('filter-selected');
        }
    
        $("#" + filterId).children('.filter_options').append(entry);
      });
    ...
    });

    DOM based caching

To save bandwidth and increase responsiveness in online use, we took this simple approach a little further and consciously stored more application-level information in the DOM than was needed for the current display, whenever that information was likely to be needed in the next step. This way, we'd have easy and quick local access to it without calling – and waiting for – the API again.

    The technical way we did so was a funny hack. Let’s look at the transition from the search result list to the venue detail view to illustrate:

    As for the filters above, the screen class for the detailView has an init() method that populates the DOM structure based on API input as encapsulated on the application level. The trick now is, while rendering the search result list, to register anonymous click handlers for each of its rows, which – JavaScript passing magic – contain a copy of, rather than a reference to, the venue objects used to render the rows themselves:

    renderItems: function (itemArray)
    {
      ...
    
      $.each(itemArray, function(key, itemData)
      {        
        var item = screen.dom.resultRowTemplate.clone().attr('id', 
          itemData.uid).addClass('venueinfo').click(function()
        {
          $('#mapScreen').hide();
          screen.showDetails(itemData);
        });
    
        $('.result-name', item).text(itemData.name);
        $('.result-type-label', item).text(itemData.section);
        $('.result-type', item).text(itemData.subSection);
    
        ...
    
        listContainer.append(item);
      });
    },
    
    ...
    
    showDetails: function (venue)
    {
      require(['screen/detailView'], function (detailView)
      {
        detailView.init(venue);
      });
    },

In effect, there's a copy of the data for rendering each venue's detail view stored in the DOM – not in hidden elements or in custom attributes of the node object, but rather conveniently in each of the anonymous, pass-by-value-based click event handlers for the result list rows, with the added benefit that they don't need to be explicitly read again but actively feed themselves into the venue details screen as soon as a row receives a touch event.

    And dummy feeds

Finishing the app before MWC 2013 was pretty much a race against time, both for us and for Time Out's API folks, who had an entirely different and equally – if not more so – sportive thing to do. Therefore they had very limited time for adding to the (legacy) API that we were building against. For one data feed, this meant that we had to resort to including static JSON files in the app's manifest and distribution, then use relative, self-referencing URLs as fake API endpoints. The illustrated list of top venues on the app's main screen was driven this way.

Not exactly nice, but much better than throwing static content into the HTML! It also meant the display code was already fit for switching to the dynamic data source that eventually materialized, and compatible with our offline data caching strategy.

    As the lack of live data on top venues then extended right to their teaser images, we made the latter physically part of the JSON dummy feed. In Base64 :) But even the low-end reference device did a graceful job of handling this huge load of ASCII garbage.

    State preservation

    We had a whopping 5M of local storage to spam, and different plans already (as well as much higher needs) for storing the map and application data for offline use. So what to do with this liberal and easily accessed storage location? We thought we could at least preserve the current application state here, so you’d find the app exactly as you left it when you returned to it.
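
    A minimal sketch of that state preservation (the key and fields are illustrative):

    // Persist just enough to restore the UI: current screen hash and scroll position.
    function saveState() {
      localStorage.setItem('appState', JSON.stringify({
        hash: window.location.hash,
        scrollY: window.scrollY
      }));
    }

    // On startup, put the user right back where they left off.
    function restoreState() {
      var state = JSON.parse(localStorage.getItem('appState') || '{}');
      if (state.hash) { window.location.hash = state.hash; }
      if (state.scrollY) { window.scrollTo(0, state.scrollY); }
    }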

    Map

    A city guide is the very showcase of an app that’s not only geo aware but geo centric. Maps fit for quick rendering and interaction in both online and offline use were naturally a paramount requirement.

    After looking around what was available, we decided to go with Leaflet, a free, easy to integrate, mobile friendly JavaScript library. It proved to be really flexible with respect to both behaviour and map sources.

    With its support for pinching, panning and graceful touch handling plus a clean and easy API, Leaflet made us arrive at a well-usable, decent-looking map with moderate effort and little pain:

For a different project, we later rendered the OSM vector data for most of Europe into terabytes of PNG tiles in cloud storage, using on-demand cloud power. We'd recommend that as an approach if there's a good reason not to rely on third-party-hosted tiles – as long as you don't try this at home: moving the tiles may well be slower and more costly than their generation.

But as time was tight before the initial release of this app, we just – legally and cautiously(!) – scraped ready-to-use OSM tiles off MapQuest.com.

The packaging of the tiles for offline use was rather easy for Barcelona, because about 1000 map tiles are sufficient to cover the whole city area up to street level (zoom level 16). So we could add each tile as a single line in the manifest.appcache file. The resulting, fully automatic, browser-based download on first use was only 10M.

    This left us with a lot of lines like

    /mobile/maps/barcelona/15/16575/12234.png
    /mobile/maps/barcelona/15/16575/12235.png
    ...

    in the manifest and wishing for a $GENERATE clause as for DNS zone files.
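
    Lacking that, a few lines of script will do; a sketch in Node.js (the tile ranges are illustrative, not Barcelona's actual ones):

    // Emit appcache entries for all tiles of one zoom level.
    var zoom = 15, xMin = 16570, xMax = 16580, yMin = 12230, yMax = 12240;
    for (var x = xMin; x <= xMax; x++) {
      for (var y = yMin; y <= yMax; y++) {
        console.log('/mobile/maps/barcelona/' + zoom + '/' + x + '/' + y + '.png');
      }
    }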

    As convenient as it may seem to throw all your offline dependencies’ locations into a single file and just expect them to be available as a consequence, there are significant drawbacks to this approach. The article Application Cache is a Douchebag by Jake Archibald summarizes them and some help is given at Html5Rocks by Eric Bidleman.

We found at the time that keeping control over the current download state, and resuming the app cache download in case the time users initially spent in our app didn't suffice for it to complete, was rather tiresome.

For Barcelona, we resorted to marking the cache state as dirty in Local Storage and clearing that flag only after we received the updateready event of the window.applicationCache object. In the later generalization to more cities, we moved the map away from the app cache altogether.
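
    In code, that dirty-flag bookkeeping looked roughly like this (the key name is illustrative):

    // Assume dirty until the app cache reports a complete, fresh download.
    localStorage.setItem('mapCacheDirty', 'true');

    window.applicationCache.addEventListener('updateready', function () {
      // The new cache has been fully downloaded; it is safe to rely on it now.
      localStorage.removeItem('mapCacheDirty');
    });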

    Offline storage

    The first step towards offline-readiness was obviously to know if the device was online or offline, so we’d be able to switch the data source between live and local.

This sounds easier than it was. Even with cross-platform considerations aside, neither the online state property (window.navigator.onLine), the events fired on the <body> element for state changes (“online” and “offline”), nor the navigator.connection object that was supposed to expose the on/offline state plus bandwidth and more, turned out to be reliable enough.

    Standardization is still ongoing around all of the above, and some implementations are labeled as experimental for a good reason :)

    We ultimately ended up writing a NetworkStateService class that uses all of the above as hints, but ultimately and very pragmatically convinces itself with regular HEAD requests to a known live URL that no event went missing and the state is correct.
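
    The convincing part boils down to something like this (the URL and interval are illustrative):

    function probeOnline(callback) {
      var xhr = new XMLHttpRequest();
      // Cache-busting query parameter so we really hit the network.
      xhr.open('HEAD', 'http://example.com/ping?' + Date.now(), true);
      xhr.timeout = 5000;
      xhr.onload = function () { callback(true); };
      xhr.onerror = xhr.ontimeout = function () { callback(false); };
      xhr.send();
    }

    // The on/offline events and navigator.onLine serve only as hints;
    // a periodic probe has the last word.
    setInterval(function () {
      probeOnline(function (online) { /* update the app's network state */ });
    }, 30000);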

    That settled, we still needed to make the app work in offline mode. In terms of storage opportunities, we were looking at:

• App / app cache – i.e. everything listed in the file that the value of appcache_path in the app's webapp.manifest points to, and which is therefore downloaded onto the device when the app is installed. Capacity: <= 50M; on other platforms (e.g. iOS/Safari), user interaction is required from 10M+; the recommendation from Mozilla was to stay below 2M. Updates: hard – they require user interaction / consent, and only a wholesale update of the entire app is possible. Access: by (relative) path. Typical use: HTML, JS, CSS, static assets such as UI icons.
• LocalStorage. Capacity: 5M on UTF-8 platforms such as FFOS, 2.5M on UTF-16 platforms such as Chrome. Updates: anytime from the app. Access: by name. Typical use: key-value storage of app status, user input, or the entire data of modest apps.
• Device Storage (often an SD card). Capacity: limited only by hardware. Updates: anytime from the app (unless mounted as a USB drive while connected to a desktop computer). Access: by path, through the Device Storage API. Typical use: big things.
• FileSystem API. Bad idea.
• Database. Capacity: unlimited on FFOS; mileage on other platforms varies. Updates: anytime from the app. Access: quick, and by arbitrary properties. Typical use: databases :)

    Some aspects of where to store the data for offline operation were decided upon easily, others not so much:

    • the app, i.e. the HTML, JS, CSS, and UI images would go into the app cache
    • state would be maintained in Local Storage
• map tiles again in the app cache – a rather dumb decision, as we learned later. Barcelona up to zoom level 16 was 10M, but later cities were different: London was >200M, and even reduced to a maximum zoom of 15 it was still worth 61M. So we moved the tiles to Device Storage and added an actively managed download process for later releases.
• the venue information, i.e. all the names, locations, images, reviews, details, showtimes etc. of the places that Time Out shows in Barcelona. Seeing that we needed lots of space, efficient and arbitrary access, plus dynamic updates, this had to go into the database. But how?

    The state of affairs across the different mobile HTML5 platforms was confusing at best, with Firefox OS already supporting IndexedDB, but Safari and Chrome (considering earlier versions up to Android 2.x) still relying on a swamp of similar but different sqlite / WebSQL variations.

    So we cried for help and received it, as always when we had reached out to the Mozilla team. This time in the form of a pointer to pouchDB, a JS-based DB layer that at the same time wraps away the different native DB storage engines behind a CouchDB-like interface and adds super easy on-demand synchronization to a remote CouchDB-hosted master DB out there.

Back then it was still in pre-alpha state, but already very usable. There were some drawbacks, such as the need to add a shim for WebSQL-based platforms, which in turn meant we couldn't rely on storage being 8-bit clean, so we had to Base64-encode our binaries, most of all the venue images. Not exactly pouchDB's fault, but it still blew up the size.
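
    Usage is pleasantly terse; a sketch against today's PouchDB API (the pre-alpha API differed in details; names and the URL are illustrative):

    var db = new PouchDB('venues');

    // Store one venue document locally.
    db.put({ _id: 'venue-123', name: 'Bar Lobo', category: 'bars' });

    // One-shot pull replication from the remote CouchDB master.
    db.replicate.from('http://example.com/couchdb/venues-barcelona');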

    Harvesting

    The DB platform being chosen, we next had to think how we’d harvest all the venue data from Time Out’s API into the DB. There were a couple of endpoints at our disposal. The most promising for this task was proximity search with no category or other restrictions applied, as we thought it would let us harvest a given city square by square.

The trouble with distance metrics, however, is that they produce circles rather than squares. So step 1 of our thinking would miss venues in the corners of our theoretical grid,

while extending the radius to (half) the grid's diagonal would produce redundant hits and necessitate deduplication.

In the end, we simply searched by proximity to a city center location, paginating through the result indefinitely, so that we could be sure to encounter every venue, and only once:

Technically, we built the harvester in PHP as an extension to the CORS-enabled, result-reducing API proxy for live operation that was already in place. It fed the venue information into the master CouchDB co-hosted there.

    Time left before MWC 2013 getting tight, we didn’t spend much time on a sophisticated data organization and just pushed the venue information into the DB as one table per category, one row per venue, indexed by location.

    This allowed us to support category based and area / proximity based (map and list) browsing. We developed an idea how offline keyword search might be made possible, but it never came to that. So the app simply removes the search icon when it goes offline, and puts it back when it has live connectivity again.

    Overall, the app now

• supported live operation out of the box,
    • checked its synchronization state to the remote master DB on startup,
    • asked, if needed, permission to make the big (initial or update) download,
    • supported all use cases but keyword search when offline.

    The involved components and their interactions are summarized in this diagram:

    Organizing vs. Optimizing the code

    For the development of the app, we maintained the code in a well-structured and extensive source tree, with e.g. each JavaScript class residing in a file of its own. Part of the source tree is shown below:

This was, however, not ideal for deployment of the app, especially as a hosted Firefox OS app or mobile website, where download is faster the fewer and smaller the files are.

    Here, Require.js came to our rescue.

    It provides a very elegant way of smart and asynchronous requirement handling (AMD), but more importantly for our purpose, comes with an optimizer that minifies and combines the JS and CSS source into one file each:

    To enable asynchronous dependency management, modules and their requirements must be made known to the AMD API through declarations, essentially of a function that returns the constructor for the class you’re defining.

Applied to the search result screen of our application, it looks like this:

    define
    (
  // new class being defined
      'screensSearchResultScreen',
    
      // its dependencies
      ['screens/abstractResultScreen', 'app/applicationController'],
    
      // its anonymous constructor
      function (AbstractResultScreen, ApplicationController)
      {
        var SearchResultScreen = $.extend(true, {}, AbstractResultScreen,
        {
          // properties and methods
          dom:
          {
            resultRowTemplate: $('#searchResultRowTemplate'),
            list: $('#search-result-screen-inner-list'),
            ...
          }
          ...
        }
        ...
    
        return SearchResultScreen;
      }
    );

    For executing the optimization step in the build & deployment process, we used Rhino, Mozilla’s Java-based JavaScript engine:

java -classpath ./lib/js.jar:./lib/compiler.jar \
  org.mozilla.javascript.tools.shell.Main ./lib/r.js \
  -o /tmp/timeout-webapp/$1_config.js

    CSS bundling and minification is supported, too, and requires just another call with a different config.
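
    The config files themselves are small; a sketch of what the two might contain (the paths are illustrative):

    // <name>_config.js for the JS build: bundle and minify, starting at the main module.
    ({
      baseUrl: '.',
      name: 'app/main',
      out: 'build/app.min.js'
    })

    // The CSS counterpart: inline all @imports and minify.
    ({
      cssIn: 'css/main.css',
      out: 'build/app.min.css',
      optimizeCss: 'standard'
    })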

    Outcome

    Four weeks had been a very tight timeline to start with, and we had completely underestimated the intricacies of taking HTML5 to a mobile and offline-enabled context, and wrapping up the result as a Marketplace-ready Firefox OS app.

    Debugging capabilities in Firefox OS, especially on the devices themselves, were still at an early stage (compared to clicking about:app-manager today). So the lights in our Cologne office remained lit until pretty late then.

    Having built the app with a clear separation between functionality and presentation also turned out a wise choice when a week before T0 new mock-ups for most of the front end came in :)

    But it was great and exciting fun, we learned a lot in the process, and ended up with some very useful shiny new tools in our box. Often based on pointers from the super helpful team at Mozilla.

    Truth be told, we had started into the project with mixed expectations as to how close to the native app experience we could get. We came back fully convinced and eager for more.

In the end, we made the deadline, and as a fellow hacker you can probably imagine our relief. The app finally even received its 70 seconds of fame when Jay Sullivan briefly demoed it at Mozilla's MWC 2013 press conference as a showcase for HTML5's and Firefox OS's offline readiness (Time Out piece at 7:50). We were so proud!

If you want to play with it, you can find the app in the marketplace or go ahead and try it online (no offline mode then).

Since then, the Time Out Firefox OS app has continued to evolve, and we as a team have used the chance to continue to play with and build apps for FFOS. To some degree, the reusable part of this has become a framework in the meantime, but that's a story for another day.

    We’d like to thank everyone who helped us along the way, especially Taylor Wescoatt, Sophie Lewis and Dave Cook from Time Out, Desigan Chinniah and Harald Kirschner from Mozilla, who were always there when we needed help, and of course Robert Nyman, who patiently coached us through writing this up.

  2. Ember.JS – What it is and why we need to care about it

    This is a guest post by Sourav Lahoti and his thoughts about Ember.js

Developers increasingly turn to client-side frameworks to simplify development, and there's a big need for good ones in this area. We see a lot of players in this field, but with so much functionality and so many moving parts, very few stand out in particular – Ember.js is one of them.

So what is Ember.js? Ember.js is an MVC (Model–View–Controller) JavaScript framework maintained by the Ember Core Team (including Tom Dale, Yehuda Katz, and others). It helps developers create ambitious single-page web applications that don't sacrifice what makes the web great: URI semantics, RESTful architecture, and the write-once, run-anywhere trio of HTML, CSS, and JavaScript.

    Why do we need to care

Ember.js is tightly coupled with the technologies that make up the web today, and it doesn't attempt to abstract them away. It brings a clean and consistent application development model, and it will evolve along with current trends in front-end web technology.

It makes it very easy to create your own “components” and “template views” that are easy to understand, create, and update. Coupled with its consistent way of managing bindings and computed properties, Ember.js does indeed take care of much of the boilerplate code that a web application otherwise needs.

    The core concept

There are some terms that you will encounter all the time when you use Ember.js, and they form its basic building blocks (a short sketch follows the list below):

    Routes
A Route object basically represents the state of the application and corresponds to a URL.
    Models
    Every route has an associated Model object, containing the data associated with the current state of the application.
    Controllers
    Controllers are used to decorate models with display logic.

    A controller typically inherits from ObjectController if the template is associated with a single model record, or an ArrayController if the template is associated with a list of records.

    Views
    Views are used to add sophisticated handling of user events to templates or to add reusable behavior to a template.
    Components
    Components are a specialized view for creating custom elements that can be easily reused in templates.
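
    A minimal sketch of how these pieces relate (classic Ember 1.x style; the names and data are illustrative):

    App = Ember.Application.create();

    // A route maps a URL to application state...
    App.Router.map(function () {
      this.route('posts');
    });

    // ...and supplies the model for that state.
    App.PostsRoute = Ember.Route.extend({
      model: function () {
        return [{ title: 'Hello' }, { title: 'World' }];
      }
    });

    // The controller decorates the model; a list of records, hence ArrayController.
    App.PostsController = Ember.ArrayController.extend({
      headline: function () {
        return this.get('length') + ' posts';
      }.property('length')
    });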

    Hands-on with Ember.js

    Data Binding:

<script type="text/x-handlebars">
  <p>
    <label>Insert your name:</label>
    {{input type="text" value=name}}
  </p>

  <p><strong>Echo: {{name}}</strong></p>
</script>
    App = Ember.Application.create();

    Final result when the user interacts with the web app

Ember.js does support data binding, as we can see in the above example. What we type into the input is bound to name, as is the text after Echo:. When we change the text in one place, it automatically updates everywhere.

    But how does this happen? Ember.js uses Handlebars for two-way data binding. Templates written in handlebars get and set data from their controller. Every time we type something in our input, the name property of our controller is updated. Then, automatically, the template is updated because the bound data changed.

    A simple Visiting card demo using Handlebars

    We can create our own elements by using Handlebars.

    HTML

    <script type="text/x-handlebars">
     
      {{v-card myname=name street-address=address locality=city zip=zipCode email=email}}
     
      <h2 class="subheader">Enter Your information:</h2>
     
      <label>Enter Your Name:</label>
      {{input type="text" value=name}}
     
      <label>Enter Your Address:</label>
      {{input type="text" value=address}}
     
      <label>Enter Your City:</label>
      {{input type="text" value=city}}
     
      <label>Enter Your Zip Code:</label>
      {{input type="text" value=zipCode}}
     
      <label>Enter Your Email address:</label>
      {{input type="text" value=email}}
     
    </script>
     
    <script type="text/x-handlebars" data-template-name="components/v-card">
     
      <ul class="vcard">
        <li class="myname">{{myname}}</li>
        <li class="street-address">{{street-address}}</li>
        <li class="locality">{{locality}}</li>
        <li><span class="state">{{usState}}</span>, <span class="zip">{{zip}}</span></li>
        <li class="email">{{email}}</li>
      </ul>
     
    </script>

    CSS

    .vcard {
      border: 1px solid #dcdcdc;
      max-width: 12em;
      padding: 0.5em;
    }
     
    .vcard li {
      list-style: none;
    }
     
.vcard .myname {
  font-weight: bold;
}
     
    .vcard .email {
      font-family: monospace;
    }
     
    label {
      display: block;
      margin-top: 0.5em;
    }

    JavaScript

    App = Ember.Application.create();
     
    App.ApplicationController = Ember.Controller.extend({
        name: 'Sourav',
        address: '123 M.G Road.',
        city: 'Kolkata',
        zipCode: '712248',
        email: 'me@me.com'
    });

    The component is defined by opening a new <script type="text/x-handlebars">, and setting its template name using the data-template-name attribute to be components/[NAME].

    We should note that the web components specification requires the name to have a dash in it in order to separate it from existing HTML tags.

    There is much more to it, I have just touched the surface. For more information, feel free to check out the Ember.js Guides.

  3. Live Editing Sass and Less in the Firefox Developer Tools

    Sass and Less are expressive languages that compile into CSS. If you’re using Sass or Less to generate your CSS, you might want to debug the source that you authored and not the generated CSS. Luckily you can now do this in the Firefox 29 developer tools using source maps.

    The Firefox developer tools use source maps to show the line number of rules in the original source, and let you edit original sources in the Style Editor. Here’s how to use the feature:

    1. Generate the source map

    When compiling a source to CSS, use the option to generate a sourcemap for each style sheet. To do this you’ll need Sass 3.3+ or Less 1.5+.

    Sass

    sass index.scss:index.css --sourcemap

    Less

    lessc index.less index.css --source-map

    This will create a .css.map source map file for each CSS file, and add a comment to the end of your CSS file with the location of the sourcemap: /*# sourceMappingURL=index.css.map */. The devtools will use this source map to map locations in the CSS style sheet to locations in the original source.

    2. Enable source maps in developer tools

    Right-click anywhere on the inspector’s rule view or in the Style Editor to get a context menu. Check off the Show original sources option:

    Enabling source maps in devtools

    Now CSS rule links will show the location in the original file, and clicking these links will take you to the source in the Style Editor:

    Original source showing in Style Editor

    3. Set up file watching

You can edit original source files in the Style Editor tool, but in order to see the changes apply to the page, you'll have to watch for changes to your preprocessed source and regenerate the CSS file each time it changes. To set up watching:

    Sass

    sass index.scss:index.css --sourcemap --watch

    Less

    For Less, you’ll have to set up another service to do the watching, like grunt.
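
    A sketch of such a Gruntfile using the grunt-contrib-less and grunt-contrib-watch plugins (the file names are illustrative):

    module.exports = function (grunt) {
      grunt.initConfig({
        less: {
          dev: {
            options: { sourceMap: true },   // emit index.css.map alongside the CSS
            files: { 'index.css': 'index.less' }
          }
        },
        watch: {
          styles: {
            files: ['*.less'],
            tasks: ['less']                 // recompile on every change
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-less');
      grunt.loadNpmTasks('grunt-contrib-watch');
    };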

    4. Save the original source

    Save the original source to your local file system by hitting the Save link or Cmd/Ctrl-S:

Saving source to disk

    The devtools will infer the location of the generated CSS file locally and watch that file for changes to update the live style sheet on the page.

    Now when you edit an original source and save it, the page’s style will update and you’ll get immediate feedback on your Sass or Less changes.

    The source has to be saved to disk and file watching set up in order for style changes to take effect.

  4. HTML5, CSS3, and the Bookmarklet that Shook the Web

On Valentine's Day last year we released a bookmarklet that went viral, riding the popularity of the Harlem Shake meme. On the anniversary of its release, we'd like to take a moment to look back at the technical nuts and bolts of the bookmarklet as a case study in applying HTML5. In fact, the HTML, JavaScript, and CSS we used wouldn't have worked on a single browser a few years ago. What follows is a technical discussion of how we took advantage of recent browser developments to shake up the web.

    Background

    Last year the Harlem Shake meme forced itself on to nearly every screen under the sun and, like everyone else, we had joked about doing our office version of the video. After tossing around a few bad video ideas, Ishan half-jokingly suggested a bookmarklet that made a web page do the Harlem Shake. Omar and Hari immediately jumped on the ingenuity of his idea and built a prototype within an hour that had the entire office LOLing. After pulling a classic all nighter we released it on February 14th, declaring “Happy Valentine’s Day, Internet! Behold, the Harlem Shake Bookmarklet”.

    Pretty soon it was picked up by news outlets like TechCrunch and HuffingtonPost, and our traffic skyrocketed. Meanwhile the bookmarklet offered a new avenue of expression in the watch-then-remix cycle that is the lifeblood of a viral meme like the Harlem Shake. Instead of creating a video of people dancing, developers could now remix this symbiotic meme in code. Startups like PivotDesk incorporated the bookmarklet into their websites, and HSMaker used the code to build a Harlem-Shake-As-A-Service website. Eventually, YouTube even built their own version as an easter egg on their site.

    So, how does it work?

Once you click the Harlem Shake bookmark, a snippet of JS is evaluated on the webpage, just as you'd see by entering javascript:alert("Hi MozHacks!"); in your address bar. This JavaScript will play the Harlem Shake audio, “shake” DOM nodes (according to timing events attached to the audio), and remove all DOM changes afterward.

    How did we attach the audio to the page and get the timing for the shakes just right?

    HTML5’s extensive audio support made this implementation fairly easy. All that was required was inserting an <audio> tag with the src pointed to the Harlem_Shake.ogg file. Once inserted into the DOM, the file would begin downloading, and playback begins once enough of the file has been buffered.
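
    In code, that insertion is only a few lines (the file URL is illustrative):

    var audioTag = document.createElement('audio');
    audioTag.src = 'http://example.com/media/Harlem_Shake.ogg';
    audioTag.autoplay = true;              // playback starts once enough is buffered
    audioTag.style.display = 'none';
    document.body.appendChild(audioTag);   // appending kicks off the download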

HTML5 timed audio events allow us to know exactly when playback begins, updates, and ends. We attach a listener to the audio node which evaluates some JS once the audio reaches a certain time. The first node starts shaking once the song is beyond 0.5s. Then, at 15.5s, we flash the screen and begin shaking all of the nodes. At 28.5s, we slow down the animations, and once the audio has ended, we stop all animations and clean up the DOM.

    audioTag.addEventListener("timeupdate", function() {
      var time = audioTag.currentTime,
          nodes = allShakeableNodes,
          len = nodes.length, i;
     
      // song started, start shaking first item
      if(time >= 0.5 && !harlem) {
        harlem = true;
        shakeFirst(firstNode);
      }
     
      // everyone else joins the party
      if(time >= 15.5 && !shake) {
        shake = true;
        stopShakeAll();
        flashScreen();
        for (i = 0; i < len; i++) {
          shakeOther(nodes[i]);
        }
      }
     
      // slow motion at the end
      if(audioTag.currentTime >= 28.4 && !slowmo) {
        slowmo = true;
        shakeSlowAll();
      }
    }, true);
     
    audioTag.addEventListener("ended", function() {
      stopShakeAll();
      removeAddedFiles();
    }, true);

    How did we choose which parts of the page to shake?

    We wrote a few helpers to calculate the rendered size of a given node, determine whether the node is visible on the page, and whether its size is within some (rather arbitrary) bounds:

    var MIN_HEIGHT = 30; // pixels
    var MIN_WIDTH = 30;
    var MAX_HEIGHT = 350;
    var MAX_WIDTH = 350;
     
    function size(node) {
      return {
        height: node.offsetHeight,
        width: node.offsetWidth
      };
    }
    function withinBounds(node) {
      var nodeFrame = size(node);
      return (nodeFrame.height > MIN_HEIGHT &&
              nodeFrame.height < MAX_HEIGHT &&
              nodeFrame.width > MIN_WIDTH &&
              nodeFrame.width < MAX_WIDTH);
    }
    // only calculate the viewport height and scroll position once
    var viewport = viewPortHeight();
    var scrollPosition = scrollY();
    function isVisible(node) {
      var y = posY(node);
      return (y >= scrollPosition && y <= (viewport + scrollPosition));
    }

We got a lot of questions about how the bookmarklet was uncannily good at initiating the shake on logos and salient parts of the page. It turns out this was the luck of using very simple heuristics. All nodes are collected (via document.getElementsByTagName("*")) and we loop over them twice:

    1. On the first iteration, we stop once we find a single node that is within the bounds and visible on the page. We then start playing the audio with just this node shaking. Since elements are searched in the order they appear in the DOM (~ the order on the page), the logo is selected with surprising consistency.
    2. After inserting the audio, we have ~15 seconds to loop through all nodes to identify all shakeable nodes. These nodes get stored in an array, so that once the time comes, we can shake them.
    // get first shakeable node
    var allNodes = document.getElementsByTagName("*"), len = allNodes.length, i, thisNode;
    var firstNode = null;
    for (i = 0; i < len; i++) {
      thisNode = allNodes[i];
      if (withinBounds(thisNode)) {
        if(isVisible(thisNode)) {
          firstNode = thisNode;
          break;
        }
      }
    }
     
if (firstNode === null) {
      console.warn("Could not find a node of the right size. Please try a different page.");
      return;
    }
     
    addCSS();
     
    playSong();
     
    var allShakeableNodes = [];
     
    // get all shakeable nodes
    for (i = 0; i < len; i++) {
      thisNode = allNodes[i];
      if (withinBounds(thisNode)) {
        allShakeableNodes.push(thisNode);
      }
    }

    How did we make the shake animations not lame?

We utilized and tweaked the Animate.css library to speed up the process; it's light and easy to use, with great results.

First, all selected nodes get a base class, mw-harlem_shake_me, that defines animation parameters for duration and how the styles should be applied.

    .mw-harlem_shake_me {
      -webkit-animation-duration: .4s;
         -moz-animation-duration: .4s;
           -o-animation-duration: .4s;
              animation-duration: .4s;
      -webkit-animation-fill-mode: both;
         -moz-animation-fill-mode: both;
           -o-animation-fill-mode: both;
              animation-fill-mode: both;
    }

The second set of classes, which defines the animation's actual behavior, is randomly picked and assigned to the various nodes; a sketch of that assignment follows the keyframes below.

    @-webkit-keyframes swing {
      20%, 40%, 60%, 80%, 100% { -webkit-transform-origin: top center; }
      20% { -webkit-transform: rotate(15deg); } 
      40% { -webkit-transform: rotate(-10deg); }
      60% { -webkit-transform: rotate(5deg); }  
      80% { -webkit-transform: rotate(-5deg); } 
      100% { -webkit-transform: rotate(0deg); }
    }
     
    @-moz-keyframes swing {
      20% { -moz-transform: rotate(15deg); }  
      40% { -moz-transform: rotate(-10deg); }
      60% { -moz-transform: rotate(5deg); } 
      80% { -moz-transform: rotate(-5deg); }  
      100% { -moz-transform: rotate(0deg); }
    }
     
    @-o-keyframes swing {
      20% { -o-transform: rotate(15deg); }  
      40% { -o-transform: rotate(-10deg); }
      60% { -o-transform: rotate(5deg); } 
      80% { -o-transform: rotate(-5deg); }  
      100% { -o-transform: rotate(0deg); }
    }
     
    @keyframes swing {
      20% { transform: rotate(15deg); } 
      40% { transform: rotate(-10deg); }
      60% { transform: rotate(5deg); }  
      80% { transform: rotate(-5deg); } 
      100% { transform: rotate(0deg); }
    }
     
    .swing, .im_drunk {
      -webkit-transform-origin: top center;
      -moz-transform-origin: top center;
      -o-transform-origin: top center;
      transform-origin: top center;
      -webkit-animation-name: swing;
      -moz-animation-name: swing;
      -o-animation-name: swing;
      animation-name: swing;
    }
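
    Picking and assigning a random animation class might look like this (the class list is a sample; the real bookmarklet ships more variants):

    var animations = ['swing', 'im_drunk', 'wobble'];  // sample class names

    function shakeOther(node) {
      var pick = animations[Math.floor(Math.random() * animations.length)];
      node.classList.add('mw-harlem_shake_me');
      node.classList.add(pick);
    }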

    Shake it like a polaroid picture

What started as a joke ended up turning into its own mini-phenomenon. The world has moved on from the Harlem Shake meme, but the bookmarklet is still inspiring developers to get creative with HTML5.

If you want to see the full source code or have suggestions, feel free to contribute to the GitHub repo!

  5. localForage: Offline Storage, Improved

    Web apps have had offline capabilities like saving large data sets and binary files for some time. You can even do things like cache MP3 files. Browser technology can store data offline and plenty of it. The problem, though, is that the technology choices for how you do this are fragmented.

localStorage gets you really basic data storage, but it's slow and can't handle binary blobs. IndexedDB and WebSQL are asynchronous, fast, and support large data sets, but their APIs aren't very straightforward. Still, neither IndexedDB nor WebSQL has support from all of the major browser vendors, and that doesn't seem likely to change in the near future.

    If you need to write a web app with offline support and don’t know where to start, then this is the article for you. If you’ve ever tried to start working with offline support but it made your head spin, this article is for you too. Mozilla has made a library called localForage that makes storing data offline in any browser a much easier task.

around is an HTML5 Foursquare client that I wrote, and it helped me work through some of the pain points of offline storage. We're still going to walk through how to use localForage, but there's some source for those of you who like to learn by perusing code.

localForage is a JavaScript library modeled on the very simple localStorage API. It gives you, essentially, the features of get, set, remove, clear, and length, but adds:

    • an asynchronous API with callbacks
    • IndexedDB, WebSQL, and localStorage drivers (managed automatically; the best driver is loaded for you)
    • Blob and arbitrary type support, so you can store images, files, etc.
    • support for ES6 Promises

    The inclusion of IndexedDB and WebSQL support allows you to store more data for your web app than localStorage alone would allow. The non-blocking nature of their APIs makes your app faster by not hanging the main thread on get/set calls. Support for promises makes it a pleasure to write JavaScript without callback soup. Of course, if you’re a fan of callbacks, localForage supports those too.

    Enough talk; show me how it works!

The traditional localStorage API, in many regards, is actually very nice; it's simple to use, doesn't enforce complex data structures, and requires zero boilerplate. If you had configuration information in an app that you wanted to save, all you need to write is:

    // Our config values we want to store offline.
    var config = {
        fullName: document.getElementById('name').getAttribute('value'),
        userId: document.getElementById('id').getAttribute('value')
    };
     
    // Let's save it for the next time we load the app.
    localStorage.setItem('config', JSON.stringify(config));
     
    // The next time we load the app, we can do:
    var config = JSON.parse(localStorage.getItem('config'));

    Note that we need to save values in localStorage as strings, so we convert to/from JSON when interacting with it.

    This appears delightfully straightforward, but you’ll immediately notice a few issues with localStorage:

    1. It’s synchronous. We wait until the data has been read from the disk and parsed, regardless of how large it might be. This slows down our app’s responsiveness. This is especially bad on mobile devices; the main thread is halted until the data is fetched, making your app seem slow and even unresponsive.

    2. It only supports strings. Notice how we had to use JSON.parse and JSON.stringify? That’s because localStorage only supports values that are JavaScript strings. No numbers, booleans, Blobs, etc. This makes storing numbers or arrays annoying, but effectively makes storing Blobs impossible (or at least VERY annoying and slow).

    A better way with localForage

localForage gets past both of these problems by offering an asynchronous API that otherwise mirrors localStorage's. Compare using IndexedDB to localForage for the same bit of data:

    IndexedDB Code

    // IndexedDB.
    var db;
    var dbName = "dataspace";
     
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
     
    var request = indexedDB.open(dbName, 2);
     
    request.onerror = function(event) {
        // Handle errors.
    };
    request.onupgradeneeded = function(event) {
        db = event.target.result;
     
        var objectStore = db.createObjectStore("users", { keyPath: "id" });
     
        objectStore.createIndex("fullName", "fullName", { unique: false });
     
        objectStore.transaction.oncomplete = function(event) {
            var userObjectStore = db.transaction("users", "readwrite").objectStore("users");
        }
    };
     
    // Once the database is created, let's add our user to it...
     
    var transaction = db.transaction(["users"], "readwrite");
     
    // Do something when all the data is added to the database.
    transaction.oncomplete = function(event) {
        console.log("All done!");
    };
     
    transaction.onerror = function(event) {
        // Don't forget to handle errors!
    };
     
    var objectStore = transaction.objectStore("users");
     
    for (var i in users) {
        var request = objectStore.add(users[i]);
        request.onsuccess = function(event) {
            // Contains our user info.
            console.log(event.target.result);
        };
    }

    WebSQL wouldn’t be quite as verbose, but it would still require a fair bit of boilerplate. With localForage, you get to write this:

    localForage Code

    // Save our users.
    var users = [ {id: 1, fullName: 'Matt'}, {id: 2, fullName: 'Bob'} ];
    localForage.setItem('users', users, function(result) {
        console.log(result);
    });

    That was a bit less work.

    Data other than strings

    Let’s say you want to download a user’s profile picture for your app and cache it for offline use. It’s easy to save binary data with localForage:

    // We'll download the user's photo with AJAX.
    var request = new XMLHttpRequest();
     
    // Let's get the first user's photo.
    request.open('GET', "/users/1/profile_picture.jpg", true);
    request.responseType = 'arraybuffer';
     
    // When the AJAX state changes, save the photo locally.
    request.addEventListener('readystatechange', function() {
        if (request.readyState === 4) { // readyState DONE
            // We store the binary data as-is; this wouldn't work with localStorage.
            localForage.setItem('user_1_photo', request.response, function() {
                // Photo has been saved, do whatever happens next!
            });
        }
    });
     
request.send();

    Next time we can get the photo out of localForage with just three lines of code:

    localForage.getItem('user_1_photo', function(photo) {
        // Create a data URI or something to put the photo in an img tag or similar.
        console.log(photo);
    });

    Callbacks and promises

    If you don’t like using callbacks in your code, you can use ES6 Promises instead of the callback argument in localForage. Let’s get that photo from the last example, but use promises instead of a callback:

    localForage.getItem('user_1_photo').then(function(photo) {
        // Create a data URI or something to put the photo in an <img> tag or similar.
        console.log(photo);
    });

    Admittedly, that’s a bit of a contrived example, but around has some real-world code if you’re interested in seeing the library in everyday usage.

    Cross-browser support

    localForage supports all modern browsers. IndexedDB is available in all modern browsers aside from Safari (IE 10+, IE Mobile 10+, Firefox 10+, Firefox for Android 25+, Chrome 23+, Chrome for Android 32+, and Opera 15+). Meanwhile, the stock Android Browser (2.1+) and Safari use WebSQL.

    In the worst case, localForage will fall back to localStorage, so you can at least store basic data offline (though not blobs and much slower). It at least takes care of automatically converting your data to/from JSON strings, which is how localStorage needs data to be stored.
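
    If you want to steer this yourself, localForage lets you force a driver; a short sketch (per the current API):

    // Force the localStorage fallback, e.g. to test the degraded path.
    localforage.setDriver(localforage.LOCALSTORAGE);

    // Or just inspect which driver was picked automatically.
    console.log(localforage.driver());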

    Learn more about localForage on GitHub, and please file issues if you’d like to see the library do more!

  6. CSS source map support, network performance analysis & more – Firefox Developer Tools Episode 29

    Firefox 29 was just uplifted to the Aurora release channel. This means that it is time to report some of the major changes that you can expect to see inside of the Developer Tools for this release.

    Better Looking Tools

    In addition to new features, we have been updating the look and feel of our dark and light themes. The light theme has been completely overhauled, and both themes feature a more consistent design throughout the toolbox. Your current theme can be changed from the Toolbox settings. (development notes)

    Network Monitor

    The Network Monitor now shows you how long it takes the browser to load different parts of your page. This will help measure the network performance of applications, both on first-run and with a primed cache. (development notes)

    To open the performance analysis tool, click the stopwatch icon in the network panel. For more information, watch the screencast below or read more on MDN.

    You can now copy an image request as a Data URI. Just right click on the image request, select the item from the context menu, and the Data URI will be on your clipboard. (development notes)

    Inspector

    We’ve updated the inspector highlighter behavior to bring the highlighting functionality more in line with other tools. (development notes)

CSS transform preview tooltips have been added to the CSS rule view. Now, if you hover over a CSS transform, you will get a tooltip with a visualization of the transform. Grab a download of Firefox Nightly or Aurora and try it out on some live CSS transform examples. (development notes)

    CSS rule view now supports pasting multiple CSS declarations at once, like background: #ccc; color: red. (development notes).

    Just like in the network panel, you can now copy <img> elements as Data URIs. (development notes)

    Style Editor

CSS source map support has been added to the Style Editor (development notes), and CSS properties and values will now be autocompleted in the Style Editor (development notes).

    Want to read more? We have published a post with more information about using source maps in DevTools to live edit Sass and Less.

    Debugger

    We have added a classic call stack list in the debugger next to the list of sources. (development notes)

    There is a new ‘enable/disable all breakpoints’ button in the debugger. This will toggle the active state of all existing breakpoints at once, to allow switching between normal usage and debugging quickly. (development notes)

    You can now highlight and inspect DOM nodes from the debugger. If you hover a DOM node in the variables listing it will be highlighted on the page, and if you click on the inspect icon the node will be opened in the inspector tab. This feature is also available in the console output. (development notes)

    Pretty printing now preserves code comments. We are using the open source pretty-fast pretty printer, so it should be pretty fast. If it isn’t, be sure to let us know. (development notes)

    Console

    console.trace has been improved: the call stack is shown inline with other output and includes links to access each line in the debugger. (development notes)
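
    For example, a snippet like the following (the function names are purely illustrative) now prints its call stack inline in the console output:

    function validate() {
      console.trace('validating input'); // the save -> validate stack shows up inline
    }
    function save() {
      validate();
    }
    save();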

    We’ve also improved console object output to show additional information based on the object type. (development notes)

    Code Editor

    The code editor can be seen throughout the tools in places like Scratchpad, Style Editor, and Debugger. Here are some of the updates you will see in this release:

    • Code folding in the editor. (development notes)
    • Emacs and VIM keybindings are now available in the code editor. To enable them, open about:config, and set “devtools.editor.keymap” to either “vim” or “emacs”, then restart DevTools. (development notes)
    • ES6 syntax highlighting support (development notes)

    Big thanks to all of our DevTools contributors this release (43 people)! Here is a list of all DevTools bugs resolved for Firefox 29.

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here or get in touch with the team at @FirefoxDevTools.

  7. JavaScriptOO.com, to find what meets your JavaScript needs

    The JavaScript Renaissance

    We all know the major players in JavaScript projects. MV* frameworks like AngularJS, Backbone, and Ember.js are inspiring a whole new breed of client applications. Utility libraries like underscore and lodash simplify constructs once reserved for academic exercise. And of course, the monolithic namespace jQuery is everywhere. The large teams and growing communities behind these projects (a little corporate backing never hurts) are moving forward and providing very solid platforms for developers to build upon. However, they are merely a precursor for the renaissance that is happening in the world of JavaScript right now.

    Enter the micro-libraries, the drop-in replacements, and the “I-Had-No-Idea-JS-Could-Do-That” projects. Thanks to tooling like Grunt, Bower, and npm, testing suites like Jasmine and QUnit, and of course the social coding site GitHub, dozens of peer-reviewed and test-driven JavaScript libraries are sprouting up every day. Fresh approaches to everything from core JavaScript functionality to abstractions of the ridiculously complex are in abundance and expanding the very foundation of the web.

    VerbalExpression lets you write regular expressions in English; Knwl.js is a natural language processor; 140medley is an entire framework in 821 bytes. Want a DOM selector engine other than sizzle? Try micro-selector, nut, zest, qwery, Sly, or Satisfy. Need a templating engine? Try T-Lite, Grips, gloomy, Transparency, dust, hogan.js, Tempo, Plates, Mold, shorttag, doT.js, t.js, Milk, or at least 10 others. Dates got you down? Check out Date-Utils, moment.js, datejs, an.hour.ago, time.js. Route with Pilot, filter images with CamanJS, write games in Crafty, or make a presentation with RevealJS or impress.js.

    Of course, along with this prolific creativity in the JS universe comes some serious overload. A bit of natural selection will eventually get the best of these projects on your radar, but if you want to see the really exciting bits of evolution occurring you have to watch. Constantly.

    JavaScriptOO.com

    Watching constantly is exactly what I do with JavaScriptOO.com. I watch, I lurk, I read, and eventually I find something that really inspires me.

    The elevator pitch for the site is that it is a directory of JavaScript libraries with examples, CDN links, statistics, and sometimes videos about each library.

    Behind the scenes, after sifting through GitHub, Twitter, Hacker News, pineapple, and an endless stream of sites and finding something exciting, I begin the slow process of adding a library to the site. Slow is a relative term, but for me, in this context, it means anywhere from 30 minutes to a few days. Adding a library to the site is a purposefully manual process that requires that I actually spend some time with the library, writing an example for it, categorizing it as best I can, and sometimes even creating a video about it.

    This slow process is a huge bottleneck for updates on JSOO, and boy, do I hear about it. However, it also keeps the site from becoming just a directory of github links and it keeps the single curator excited about maintaining the site.

    Examples and submitting your library

    There are currently 409 examples on the site… almost one for every day it has been online. There are 79 libraries in the “Needed Examples” section where visitors can submit a gist or fiddle for that library and are encouraged to “include your Twitter handle or any other marketing you may like to, but keep it simple”. Lastly, there is a section for submitting your own library. Not all libraries submitted are added to the site, but they are given immediate priority and, if they are a fit, added to the queue. There is no editorial, no blog, no opinion at all other than hoping every visitor feels like this:

    Beyond the very manual process of adding a library, the site is also a chance for me to experiment with all sorts of tech and see in real time how it performs under a moderate load. Originally launched as a .NET application, most of what you see today is running node.js under iisnode using Express with Jade templates (moving to doT.js as I write), a gulpjs build process, a homegrown CMS using AngularJS and VB.NET (gasp!), and a Lucene.NET search application in C#.

  8. Five Potential Privacy Pitfalls for App Developers

    Fighting for data privacy — making sure people know who has access to their data, where it goes or could go, and that they have a choice in all of it — is part of Mozilla’s DNA. Privacy is an integral part of building an Internet where people come first.

    “Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.” ~ Mozilla Manifesto (principle #4)

    On January 28th, the world will celebrate Data Privacy Day — an international holiday intended to raise awareness and promote data privacy education. It is officially recognized in the United States, Canada, and 27 European countries.

    As part of our own Data Privacy Day celebrations this year, we have created a developer-specific list of Privacy Pitfalls to watch out for:

    1. More isn’t always better

    Collecting lots of data is seductive but in a privacy context can create risk (the more you have, the more you need to protect) and a lot of additional work. A privacy policy, for example, may require you to publicly share:

    • What personal information your app collects
    • Why you are collecting it
    • Where it will be stored (on the device or elsewhere)
    • Who it will be shared with and why
    • How long you will keep it
    • How a user can have their data removed

    What’s more, you are on the hook to make sure all of the things you’ve communicated are really happening.

    The key is to collect only what you need. When you are in the planning stages for your app, document the data collection, usage and flows. You should be able to justify each piece of personal information and describe how it will be used. If you plan to collect personal information for future or extra features beyond core functions, always give users the ability to opt-out.

    Finally, know which types of data collection require informed consent, such as information about a user’s movements and activities gathered through location and movement sensors, sound, or activation of the device camera.

    2. Avoid treating your user’s contacts as your own

    “Share” buttons and social media sign-in widgets are ubiquitous in today’s apps and Web sites. And while these buttons may make it easier for users to share information, they are not an all-access pass into your user’s address book.

    Respecting the people who use your software, and allowing them to control what’s being shared and with whom, builds trust and loyalty. These two things have the potential to be exponentially more valuable than a random connection with a stranger.

    3. Provide a fair trade for the data you collect

    User data is undeniably valuable and collecting it isn’t inherently wrong, especially with consent. Oftentimes though, users are asked to trade their valuable personal data without much in return (sometimes, as in the address book example above, they may not even know they’re giving you anything).

    Collecting data with a fair trade mindset, making sure the people who give you their information get something in return (features, a personalized experience, etc.), helps the user feel respected and in control, resulting in an invaluable sense of trust and loyalty.

    4. Understand all the privacy conditions you yourself are agreeing to

    If you use third party services, like analytics or ad networks, make sure you are aware of their data collection practices as well as your own — they could impact your users. Third party code or software development kits can contain code you aren’t aware of. Providing accurate information to your users on the data collected by these organizations should be part of your privacy policy, and disclosed as if you were collecting it yourself.

    It’s best to identify third parties by name and to link to information about how to modify or delete the data they collect. You should also consider providing a means for your users to opt out of such tracking.

    5. There is no “one policy fits all” when it comes to privacy

    Despite your best intentions to respect user privacy, legal requirements and user expectations can vary widely – a challenge made especially acute now that apps are available to a global audience. While we can’t give you legal advice, we can share some of the nuggets we’ve found through our user research in different countries:

    • In the US, non-technical consumers care more about their social circle tracking their online behavior than companies or the government.
    • In Thailand, relatives share and swap devices freely with each other, with little desire to create individual accounts or erase their personal data.
    • In Germany and most of Europe, consumers are quite sensitive about sharing their personal information with companies and governments, and those countries have strict laws to reflect that stance.
    • In Brazil, the middle class is more concerned about thieves physically stealing their devices (particularly mobile phones) than about online piracy.

    Ultimately, talking to real users first can go a long way in making an app that truly reflects their privacy concerns. Engaging with real users not only reveals unique behaviors, but also digs into the motivations and emotions that drive these preferences.

    In our experience, it can be hard for users to articulate how they feel about privacy. With the exception of privacy-sensitive countries and individuals, “privacy” may not always be top of mind. Rather than using the term “privacy” when talking to users, we’ve had success asking specific questions, such as who the user feels comfortable sharing personal information with and why, instead of trying to get the participant to talk about privacy in an abstract way.

    Do you have experiences related to these pitfalls? What others have you encountered in your work? Let us know in the comments!

  9. WebGL Deferred Shading

    WebGL brings hardware-accelerated 3D graphics to the web. Many features of WebGL 2 are available today as WebGL extensions. In this article, we describe how to use the WEBGL_draw_buffers extension to create a scene with a large number of dynamic lights using a technique called deferred shading, which is popular among top-tier games.

    live demo | source code

    Today, most WebGL engines use forward shading, where lighting is computed in the same pass that geometry is transformed. This makes it difficult to support a large number of dynamic lights and different light types.

    Forward shading can use a pass per light. Rendering a scene looks like:

    foreach light {
      foreach visible mesh {
        if (light volume intersects mesh) {
          render using this material/light shader;
          accumulate in framebuffer using additive blending;
        }
      }
    }

    This requires a different shader for each material/light-type combination, which adds up. From a performance perspective, each mesh needs to be rendered (vertex transform, rasterization, material part of the fragment shader, etc.) once per light instead of just once. In addition, fragments that ultimately fail the depth test are still shaded, but with early-z and z-cull hardware optimizations and front-to-back sorting or a z-prepass, this is not as bad as the cost of adding lights.

    To optimize performance, light sources that have a limited effect are often used. Unlike real-world lights, we allow the light from a point source to travel only a limited distance. However, even if a light’s volume of effect intersects a mesh, it may only affect a small part of the mesh, but the entire mesh is still rendered.

    In practice, forward shaders usually try to do as much work as they can in a single pass, leading to the need for a complex system of chaining lights together in a single shader. For example:

    foreach visible mesh {
      find lights affecting mesh;
      Render all lights and materials using a single shader;
    }

    The biggest drawback is the number of shaders required since a different shader is required for each material/light (not light type) combination. This makes shaders harder to author, increases compile times, usually requires runtime compiling, and increases the number of shaders to sort by. Although meshes are only rendered once, this also has the same performance drawbacks for fragments that fail the depth test as the multi-pass approach.

    Deferred Shading

    Deferred shading takes a different approach than forward shading by dividing rendering into two passes: the g-buffer pass, which transforms geometry and writes positions, normals, and material properties to textures called the g-buffer, and the light accumulation pass, which performs lighting as a series of screen-space post-processing effects.

    // g-buffer pass
    foreach visible mesh {
      write material properties to g-buffer;
    }
     
    // light accumulation pass
    foreach light {
      compute light by reading g-buffer;
      accumulate in framebuffer;
    }

    This decouples lighting from scene complexity (number of triangles) and only requires one shader per material and per light type. Since lighting takes place in screen-space, fragments failing the z-test are not shaded, essentially bringing the depth complexity down to one. There are also downsides such as its high memory bandwidth usage and making translucency and anti-aliasing difficult.

    Until recently, WebGL had a roadblock for implementing deferred shading. In WebGL, a fragment shader could only write to a single texture/renderbuffer. With deferred shading, the g-buffer is usually composed of several textures, which meant that the scene needed to be rendered multiple times during the g-buffer pass.

    WEBGL_draw_buffers

    Now with the WEBGL_draw_buffers extension, a fragment shader can write to several textures. To use this extension in Firefox, browse to about:config and turn on webgl.enable-draft-extensions. Then, to make sure your system supports WEBGL_draw_buffers, browse to webglreport.com and verify it is in the list of extensions at the bottom of the page.

    To use the extension, first initialize it:

    var ext = gl.getExtension('WEBGL_draw_buffers');
    if (!ext) {
      // Extension not available: render the g-buffer in multiple passes instead.
    }

    We can now bind multiple textures, tx[] in the example below, to different framebuffer color attachments.

    var fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tx[0], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tx[1], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, tx[2], 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT3_WEBGL, gl.TEXTURE_2D, tx[3], 0);

    For debugging, we can check to see if the attachments are compatible by calling gl.checkFramebufferStatus. This function is slow and should not be called often in release code.

    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
      // Can't use framebuffer.
      // See http://www.khronos.org/opengles/sdk/docs/man/xhtml/glCheckFramebufferStatus.xml
    }

    Next, we map the color attachments to draw buffer slots that the fragment shader will write to using gl_FragData.

    ext.drawBuffersWEBGL([
      ext.COLOR_ATTACHMENT0_WEBGL, // gl_FragData[0]
      ext.COLOR_ATTACHMENT1_WEBGL, // gl_FragData[1]
      ext.COLOR_ATTACHMENT2_WEBGL, // gl_FragData[2]
      ext.COLOR_ATTACHMENT3_WEBGL  // gl_FragData[3]
    ]);

    The maximum size of the array passed to drawBuffersWEBGL depends on the system and can be queried by calling gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL). In GLSL, this is also available as gl_MaxDrawBuffers.
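
    For example, a short guard like this (the threshold of four matches the g-buffer layout described below) avoids requesting more attachments than the hardware supports:

    // Query the system limit before building a four-texture g-buffer.
    var maxDrawBuffers = gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL);
    if (maxDrawBuffers < 4) {
      // Not enough simultaneous draw buffers; fall back to one pass per texture.
    }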

    In the deferred shading geometry pass, the fragment shader writes to multiple textures. A trivial pass-through fragment shader is:

    #extension GL_EXT_draw_buffers : require
    precision highp float;
    void main(void) {
      gl_FragData[0] = vec4(0.25);
      gl_FragData[1] = vec4(0.5);
      gl_FragData[2] = vec4(0.75);
      gl_FragData[3] = vec4(1.0);
    }

    Even though we initialized the extension in JavaScript with gl.getExtension, the GLSL code still needs to include #extension GL_EXT_draw_buffers : require to use the extension. With the extension, the output is now the gl_FragData array that maps to framebuffer color attachments, not gl_FragColor, which is traditionally the output.

    g-buffers

    In our deferred shading implementation, the g-buffer is composed of four textures: eye-space position, eye-space normal, color, and depth. Position, normal, and color use the floating-point RGBA format via the OES_texture_float extension, and depth uses the unsigned-short DEPTH_COMPONENT format.
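
    Allocating one such floating-point texture might look like the sketch below; the width and height variables are assumed, and NEAREST filtering is used because float textures are not filterable without an additional extension:

    gl.getExtension('OES_texture_float'); // allows gl.FLOAT pixel data
    var positionTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, positionTexture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);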

    [Figures: the position, normal, color, and depth textures of the g-buffer, and the light accumulation result computed from them.]

    This g-buffer layout is simple for our testing. Although four textures is common for a full deferred shading engine, an optimized implementation would try to use the least amount of memory by lowering precision, reconstructing position from depth, packing values together, using different distributions, and so on.

    With WEBGL_draw_buffers, we can use a single pass to write each texture in the g-buffer. Compared to using a single pass per texture, this improves performance and reduces the amount of JavaScript code and GLSL shaders. As shown in the graph below, as scene complexity increases so does the benefit of using WEBGL_draw_buffers. Since increasing scene complexity requires more drawElements/drawArrays calls, more JavaScript overhead, and transforms more triangles, WEBGL_draw_buffers provides a benefit by writing the g-buffer in a single pass, not a pass per texture.

    All performance numbers were measured using an NVIDIA GT 620M, which is a low-end GPU with 96 cores, in Firefox 26.0 on Windows 8. In the above graph, 20 point lights were used. The light intensity decreases proportionally to the square of the distance between the current position and the light position. Each Stanford Dragon is 100,000 triangles and requires five draw calls, so, for example, when 25 dragons are rendered, 125 draw calls (and related state changes) are issued, and a total of 2,500,000 triangles are transformed.


    WEBGL_draw_buffers test scene, shown here with 100 Stanford Dragons.

    Of course, when scene complexity is very low, like the case of one dragon, the cost of the g-buffer pass is low so the savings from WEBGL_draw_buffers are minimal, especially if there are many lights in the scene, which drives up the cost of the light accumulation pass as shown in the graph below.

    Deferred shading requires a lot of GPU memory bandwidth, which can hurt performance and increase power usage. After the g-buffer pass, a naive implementation of the light accumulation pass would render each light as a full-screen quad and read the entirety of each g-buffer. Since most light types, like point and spot lights, attenuate and have a limited volume of effect, the full-screen quad can be replaced with a world-space bounding volume or tight screen-space bounding rectangle. Our implementation renders a full-screen quad per light and uses the scissor test to limit the fragment shader to the light’s volume of effect.
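
    A rough sketch of that scissor approach follows; the lightRect bounds and the drawFullScreenQuad helper are assumptions for illustration, not code from the demo:

    // Restrict the full-screen light pass to the light's screen-space bounds.
    gl.enable(gl.SCISSOR_TEST);
    gl.scissor(lightRect.x, lightRect.y, lightRect.width, lightRect.height);
    drawFullScreenQuad(); // fragments outside the rectangle are never shaded
    gl.disable(gl.SCISSOR_TEST);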

    Tile-Based Deferred Shading

    Tile-based deferred shading takes this a step further and splits the screen into tiles, for example 16×16 pixels, and then determines which lights influence each tile. Light-tile information is then passed to the shader and the g-buffer is only read once for all lights. Since this drastically reduces memory bandwidth, it improves performance. The following graph shows performance for the sponza scene (66,450 triangles and 38 draw calls) at 1024×768 with 32×32 tiles.
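
    A rough sketch of that light-tile binning might look like this; the light.bounds screen-space rectangle is an assumption for illustration:

    // Bin light indices into screen tiles (tileSize in pixels, e.g. 32).
    function binLights(lights, width, height, tileSize) {
      var tilesX = Math.ceil(width / tileSize);
      var tilesY = Math.ceil(height / tileSize);
      var tiles = [];
      for (var i = 0; i < tilesX * tilesY; i++) {
        tiles.push([]);
      }
      lights.forEach(function (light, index) {
        var b = light.bounds; // screen-space {x, y, width, height}, assumed
        var x0 = Math.max(0, Math.floor(b.x / tileSize));
        var y0 = Math.max(0, Math.floor(b.y / tileSize));
        var x1 = Math.min(tilesX - 1, Math.floor((b.x + b.width) / tileSize));
        var y1 = Math.min(tilesY - 1, Math.floor((b.y + b.height) / tileSize));
        for (var ty = y0; ty <= y1; ty++) {
          for (var tx = x0; tx <= x1; tx++) {
            tiles[ty * tilesX + tx].push(index);
          }
        }
      });
      return tiles; // per-tile light index lists handed to the lighting shader
    }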

    Tile size affects performance. Smaller tiles require more JavaScript overhead to create the light-tile information, but less computation in the lighting shader. Larger tiles have the opposite tradeoff. Choosing a suitable tile size is therefore important for performance. The figure below shows the relationship between tile size and performance with 100 lights.

    A visualization of the number of lights in each tile is shown below. Black tiles have no lights intersecting them and white tiles have the most lights.


    Shaded version of tile visualization.

    Conclusion

    WEBGL_draw_buffers is a useful extension for improving the performance of deferred shading in WebGL. Check out the live demo and our code on GitHub.

    Acknowledgements

    We implemented this project for the course CIS 565: GPU Programming and Architecture, which is part of the computer graphics program at the University of Pennsylvania. We thank Liam Boone for his support and Eric Haines and Morgan McGuire for reviewing this article.


  10. Localizing the Firefox OS Boilerplate App

    As Firefox OS devices are launched in more and more countries and apps become available to users of many different languages, it becomes increasingly important to consider localizing your app. Making your app available in more languages is one of the best ways to make it relevant to more users.

    As such, we’re piloting a program to help developers not only localize their apps, but also connect them with localizers. If you’re an interested developer or localizer, please get in touch and we’ll tell you more about how to join our program.

    Localizing is generally a straightforward process and consists of two main steps:

    1. preparing your app for localization (sometimes called internationalization)
    2. obtaining translations of relevant app content

    There are many approaches to localization. Many of you are probably already familiar with gettext-based libraries. Jed is one such implementation specifically designed for HTML and JavaScript-based applications. Mozilla is also leading a next-generation localization project called L20n. In this article, we build upon the method we previously mentioned, using the webL10n.js library also employed by Gaia, the UI layer for Firefox OS, and show you how we localized the Firefox OS Boilerplate App.

    Firefox OS Boilerplate App, localized in French

    Setup

    If you’d like to follow along, clone the Firefox OS Boilerplate App, which is available on GitHub:

    git clone https://github.com/robnyman/Firefox-OS-Boilerplate-App.git

    As mentioned, we used the version of Fabien Cazenave’s webL10n.js library that is available in the Gaia source tree (direct link). In your own projects, feel free to use either version. Not sure which to use? Read this.

    First, I navigated to the directory that contains all application JavaScript and then downloaded l10n.js:

    [Firefox OS App Boilerplate/js]$
    wget https://raw.githubusercontent.com/mozilla-b2g/gaia/master/shared/js/l10n.js

    Then I created a directory to contain all locale information:

    [Firefox OS App Boilerplate/]$
    mkdir locales

    This is where we store the files that contain both the base and the translated strings for the Boilerplate App.

    Now, in that directory, I created the locales initialization file. You can name this anything you like. I selected locales.ini, which is what many of the Gaia apps use:

    [Firefox OS App Boilerplate/locales]$
    touch locales.ini

    In that file, I began by specifying which property file to import for the default language, which in our case is en-US:

    @import url(en-US/app.properties)

    The import line for your default locale should always be the first line of this file.

    It’s up to you how to name and organize your properties files. For the Firefox OS Boilerplate, I chose to have each properties file named app.properties and stored in its own directory, named for its locale. Here’s what the beginning of the locales/ directory looks like:

    ├── ar
    │   ├── app.properties
    │   └── manifest.properties
    ├── de
    │   ├── app.properties
    │   └── manifest.properties
    ├── el
    │   └── manifest.properties
    ├── en-US
    │   ├── app.properties
    │   └── manifest.properties

    This structure makes working with our chosen translation platform, Transifex, easier. We’ll explain Transifex along with those manifest.properties files a bit later on.

    As new locales are added, the locales.ini file needs to be updated. Here are lines I added for the ar and de locales:

    [ar]
    @import url(ar/app.properties)
    [de]
    @import url(de/app.properties)

    Each import statement needs to be preceded by a heading with the relevant locale code in square brackets. This tells l10n.js which import statements to use for which locales.

    Next, I edited the main app file, index.html, to include the l10n.js library and to specify which initialization file to use. In the head section, I added:

    <!-- Localization -->
    <link rel="resource" type="application/l10n" href="locales/locales.ini" />
    <script type="application/javascript" src="js/l10n.js"></script>

    At this point I checked to make sure everything was still loading as expected, which it was. The easiest way to check your work as you go is to use the Firefox OS Simulator.

    Indicating translatable content with data-l10n-id tags

    One advantage of using l10n.js is that any elements specified as needing translation are translated automatically upon page load without the user having to select a locale. The library determines the locale based on the system locale (e.g., which locale the user selected when they completed the first run experience).

    The way that you specify which elements should be translated is by giving them data-l10n-id attributes. For example, I indicated that the h2 heading "Firefox OS Boilerplate App" needed translation with the following:

    <h2 data-l10n-id="app-heading">
        Firefox OS Boilerplate App
    </h2>

    The value that you give this data-l10n-id attribute will become a key in your properties file. In en-US/app.properties, I added a corresponding line for this heading:

    app-heading  = Firefox OS Boilerplate App

    For your default locale, the value assigned to this key will be identical to what it is in your app. But for other locales, it will be translated. Here’s what the corresponding line looks like in de/app.properties:

    app-heading  = Firefox OS Boilerplate App in Deutsch!

    Any element with text that you want to localize can and should be given a data-l10n-id attribute, and a corresponding line should then be added to your default locale properties file. This can be tedious to do after you have created a lot of application code, so you might consider writing a script to help you out. (I have one in the works, but it’s not fit for consumption yet.)
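
    If you want a rough starting point, a quick browser-console sketch like this (purely illustrative, and not the script mentioned above) lists every tagged element as a properties line:

    // Print every data-l10n-id in the document as "key = value" lines.
    var lines = [];
    document.querySelectorAll('[data-l10n-id]').forEach(function (el) {
      lines.push(el.getAttribute('data-l10n-id') + ' = ' + el.textContent.trim());
    });
    console.log(lines.join('\n'));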

    Retrieving translated strings with JavaScript

    Sometimes you will need to access localized strings from JavaScript. When you do, in order to be sure the l10n.js library is available, you’ll need to wrap your functions with the navigator.mozL10n.ready() function. The following code snippet enables the localization of a notification that is created via JavaScript:

    navigator.mozL10n.ready(function () {
        // shortcut to the string lookup function provided by l10n.js
        var _ = navigator.mozL10n.get;
        // Notifications
        var addNotification = document.querySelector("#add-notification");
        if (addNotification) {
            addNotification.onclick = function () {
                var appRef = navigator.mozApps.getSelf(),
                    icon;
                appRef.onsuccess = function (evt) {
                    icon = appRef.result.manifest.icons["16"];
                    console.log("I am: " + icon);
                    var notification = navigator.mozNotification.createNotification(
                        _('notification-text'),
                        icon,
                        icon
                    );
                    notification.show();
                    var img = document.createElement("img");
                    img.src = icon;
                    document.body.appendChild(img);
                };
            };
        }
    });

    If you don’t follow this method, you can’t be sure that your JavaScript will run only after the l10n library is completely initialized.

    Localizing the manifest

    After tagging all elements with data-l10n-id attributes and adding corresponding lines to my en-US/app.properties file, it was then time to make some changes to the manifest.webapp.

    The first change was to make sure a default locale was specified:

    "default_locale": "en",

    This property refers to the locale of the manifest itself. It is used primarily by the Marketplace to determine how to parse the manifest; it is not used by the device the app is installed on or by the Firefox OS Simulator. MDN recommends that whatever locale is used here not be included in the ‘locales’ property. Instead, the name and description for the default locale are specified in the ‘name’ and ‘description’ fields, which are always required.

    Next, I added a ‘locales’ property:

    "locales": {
    }

    As new locales are added, it’s necessary to update the locales property with a localized version of the app name and description. These values are used not only by the Firefox OS user interface, but also by the Firefox Marketplace. To accomplish this, I created separate manifest.properties files in each of my locale directories. Having separate files for each locale makes it easier for localizers to work on the project and also makes it easier for me to manage. When a localizer completes a new locale, I copy the values to the manifest.webapp file. This is something that could easily be scripted, however (see the sketch after the example below).

    This is the translated de/manifest.properties file:

    name=Firefox OS Boilerplate App in Deutsch!
    description=Boilerplate Firefox OS App mit Beispiel Anwendungsfälle, um loszulegen

    And the updated ‘locales’ property in manifest.webapp:

    "locales": {
      "de": {
        "name": "Firefox OS Boilerplate App",
        "description": "Boilerplate Firefox OS App mit Beispiel Anwendungsfälle, um loszulegen"
      }
    }
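
    Since this copy step could easily be scripted, here is a hypothetical Node.js sketch of it; the naive .properties parsing and the hard-coded paths are assumptions for illustration, not part of the Boilerplate:

    var fs = require('fs');

    // Naively parse "key=value" lines from a .properties file.
    function readProperties(path) {
      var result = {};
      fs.readFileSync(path, 'utf8').split('\n').forEach(function (line) {
        var eq = line.indexOf('=');
        if (eq > 0) {
          result[line.slice(0, eq).trim()] = line.slice(eq + 1).trim();
        }
      });
      return result;
    }

    // Copy the localized name and description into the manifest's locales block.
    var manifest = JSON.parse(fs.readFileSync('manifest.webapp', 'utf8'));
    var de = readProperties('locales/de/manifest.properties');
    manifest.locales = manifest.locales || {};
    manifest.locales.de = { name: de.name, description: de.description };
    fs.writeFileSync('manifest.webapp', JSON.stringify(manifest, null, 2));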

    Managing the Localization Process with Transifex

    With the Boilerplate, we started a pilot program to explore the use of Transifex for managing translations. If you visit our team page there, you’ll see the Boilerplate along with a handful of other apps from developers who have joined us there.

    Why Transifex?

    In looking at l10n platforms, we wanted one that would support both paid and volunteer translations, as well as a wide range of localization formats and workflows. Transifex fit that profile. We’re also very excited about Pontoon, a platform currently in development by our l10n team, and look forward to using it with the Boilerplate and other apps when it’s ready.

    Creating the project

    Once signed up and logged in to Transifex, creating a project is easy. You specify a project name, description, source language and license. If you indicate that your license is open source, you’ll be prompted for the link to your app’s source code.


    Configuring Transifex (tx config)

    I like to work from the command line, so I used the Transifex client (installation details) for the following steps, but you can do them from the Transifex website as well.

    Working in the root of my Firefox OS Boilerplate App directory, I first initialized the Transifex project:

    [Firefox OS App Boilerplate/]$
    tx init

    This command creates a .tx directory and a config file within it.

    When first setting up a project to work with Transifex, you’ll need to set some values in this config file.

    The .tx/config file for the Boilerplate looks like this:

    [main]
    host = https://www.transifex.com
     
    [firefox-os-boilerplate.app_properties]
    file_filter = locales/<lang>/app.properties
    source_file = locales/en-US/app.properties
    source_lang = en
    type = MOZILLAPROPERTIES
    minimum_perc = 50
     
    [firefox-os-boilerplate.manifest_properties]
    file_filter = locales/<lang>/manifest.properties
    source_file = locales/en-US/manifest.properties
    source_lang = en
    type = MOZILLAPROPERTIES
    minimum_perc = 50

    Each block of the config is indicated with square brackets. A project on Transifex can have any number of resources, so you can organize your app in the way that you like.

    For the Boilerplate, we have two resources:

    • firefox-os-boilerplate.app_properties, which maps to the file app.properties and includes all of the strings from the app that we want to localize.
    • firefox-os-boilerplate.manifest_properties, which maps to the file manifest.properties and includes the localized name and description that we’ll copy to the manifest.webapp.

    Resources are also listed in the Transifex web interface:


    In Transifex, each resource will be copied when a new language is requested. Translators then check out those files, edit them to include their translations and then check them back in when they are done.

    Other options are:

    • file_filter: Tells Transifex how you have organized your locale files. For the Boilerplate, I wanted each properties file to have the same name and be sorted into directories named after each locale. Transifex substitutes the locale code for <lang> to achieve this.
    • source_file: Tells Transifex which file is the source of strings (the default locale).
    • source_lang: Indicates the locale of the source file (i.e., the default project locale).
    • type: Indicates what file type you are using for translation. Transifex supports a number of options.
    • minimum_perc: Sets the threshold value for when Transifex will pull in a new locale. 50 means that a locale must be at least 50% complete before Transifex will pull the locale into your project.

    Once the config file was completed, I pushed it to Transifex with:

    [Firefox OS App Boilerplate/]$
    tx push -s
    Pushing translations for resource firefox-os-boilerplate.app_properties:
    Pushing source file (locales/en-US/app.properties)
    Resource does not exist.  Creating...
    Pushing translations for resource firefox-os-boilerplate.manifest_properties:
    Pushing source file (locales/en-US/manifest.properties)
    Resource does not exist.  Creating...
    Done.

    Workflow

    Once the Boilerplate project was set up to use Transifex, I was able to use the client to pull and push new or updated translations:

    [Firefox OS App Boilerplate/]$
    tx pull
    Pulling translations for resource firefox-os-boilerplate.app_properties (source: locales/en-US/app.properties)
     -> ar: locales/ar/app.properties
     -> id: locales/id/app.properties
    Pulling translations for resource firefox-os-boilerplate.manifest_properties (source: locales/en-US/manifest.properties)
    Done.

    Transifex works seamlessly with any revision control system. We use git for the Firefox OS Boilerplate, and our workflow looks like this:

    1. Use tx pull to bring in new translations from Transifex (could also download them via the web interface).
    2. Commit and push changes with git.
    3. Repeat.

    We can also accept localizations via git and then push them to Transifex.

    Note: When using Transifex, I recommend that you keep your .tx config directory in your project’s code repo. You’ll want anyone checking out your project to use this information to properly sync with Transifex. No secret information is contained in .tx/config (rather, that’s in ~/.transifexrc).

    Invitation to participate!

    If you are a developer interested in localizing your app, or a localizer interested in contributing translations, we’d love to hear from you! We also invite you to join our team on Transifex if you’d like to connect with other developers and translators who are working on localization of Firefox OS apps.