  1. The Translation of the Firetext App

    The History

    Firetext is an open-source word processor. The project was started in early 2013 by @HRanDEV, @logan-r, and me (@Joshua-S). The goal of the project was to provide a user-friendly editing experience, and to fill a major gap in functionality on Firefox OS.

Firetext 0.3

In the year since its initiation, Firetext became one of the top ten most popular productivity apps in the Firefox Marketplace. We made a myriad of additions: Firetext gained Dropbox integration, enabling users to store documents in the cloud, and Night Mode, a feature that automatically adjusts the interface to the surrounding light level. There was also a new interface design, better performance, and web activity support.

Even with all of these features, Firetext’s audience was rather small. We had only supported English, and according to Exploredia, only 17.65% of the world’s population speaks English fluently. So, we decided to localize Firetext.

    The Approach

After reading a Hacks post about Localizing Firefox Apps, we decided to use a combination of webL10n and Google Translate as our localization tools. We would localize into the languages our contributors knew (Spanish and German), and then use Google Translate for the others. Eventually, we planned to grow a community that could contribute translations, instead of relying on the often erratic machine translations.

    The Discovery

    A few months passed, and still no progress. The task was extremely daunting, and we did not know how to proceed. This stagnation continued until I stumbled upon a second Hacks post, Localizing the Firefox OS Boilerplate App.

    It was like a dream come true. Mozilla had started a program to help smaller app developers with the localization process. We could benefit from their larger contributor pool, while helping them provide a greater number of apps to foreign communities.

    I immediately contacted Mozilla about the program, and was invited to set up a project on Transifex. The game was on!

    The Code

    I started by creating a locales directory that would contain our translation files. I created a locales.ini file in that directory to show webL10n where to find the translations. Finally, I added a folder for each locale.

    locales.ini - Firetext
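The original post embedded the file here; a minimal sketch of what a webL10n locales.ini could look like (the locale folders are illustrative):

    @import url(en_US/app.properties)

    [es]
    @import url(es/app.properties)

    [de]
    @import url(de/app.properties)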

    I then tagged each translatable element in the html files with a data-l10n-id attribute, and localized alert()s and our other scripted notifications by using webL10n’s document.webL10n.get() or _() function.
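For illustration, a tagged element might look like this (the string IDs here are hypothetical, not Firetext’s actual ones):

    <button data-l10n-id="save-document">Save</button>

and a scripted notification can be localized like so:

    // webL10n exposes document.webL10n.get(), commonly aliased to _()
    var _ = document.webL10n.get;
    alert(_('document-saved'));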

It was time to add the translations. I created an app.properties file in the locales/en_US directory, and referenced it from locales.ini. After doing that, I added all of the strings that were supposed to be translated.

    app.properties - Firetext
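The original post embedded the file here; a hedged sketch of the key=value format webL10n expects (these entries are hypothetical):

    save-document = Save
    document-saved = Your document has been saved.
    night-mode = Night Mode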

webL10n automatically detects the user’s default locale, but we also needed to be able to change locales manually. To allow this, I added a select in the Firetext settings panel that contained all of the prospective languages.

Settings - Firetext
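Wiring the select up is then a small change handler; a minimal sketch (the element ID is illustrative), relying on webL10n’s setLanguage():

    var languageSelect = document.getElementById('language-select');
    languageSelect.addEventListener('change', function () {
      // Ask webL10n to load and apply the chosen locale
      document.webL10n.setLanguage(languageSelect.value);
    });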

    Even after all of this, Firetext was not really localized; we only had an English translation. This is where Transifex comes into the picture.

    The Translation

    I created a project for Firetext on Transifex, and then added a team for each language on our GitHub issue. I then uploaded the app.properties file as a resource.

    I also uploaded the app description from our manifest.webapp for translation as a separate resource.

    Firetext on Transifex

    Within hours, translations came pouring in. Within the first week, Hebrew, French, and Spanish were completely translated! I added them to our GitHub repository by downloading the translation properties file, and placing it in the appropriate locale directory. I then enabled that language in the settings panel. The entire process was extremely simple and speedy.

    The Submission

Now that Firetext had been localized, I needed to submit it back to the Firefox Marketplace.  This was a fairly straightforward process: just download the zip, strip out the git files, and add in the API key for our error reporting system.

    In less than one day, Firetext was approved, and made available for our global user base.  Firetext is now available in eight different languages, and I can’t wait to see the feedback going forward!

    The Final Thoughts

    In retrospect, probably the most difficult part of localizing Firetext was supporting RTL (Right To Left) languages.  This was a bit of a daunting task, but the results have totally been worth the effort!  All in all, localization was one of the easiest features that we have implemented.

    As Swiss app developer Gerard Tyedmers, creator of grrd’s 4 in a Row and grrd’s Puzzle, said:

    “…I can say that localizing apps is definitely worth the work. It really helps finding new users.

    The l10n.js solution was a very useful tool that was easy to implement. And I am very happy about the fact that I can add more languages with a very small impact on my code…”

    I couldn’t agree more!

    Firetext Editor - Spanish

    Editor’s Note: The Invitation

    Have a great app like Firetext?  You’re invited too!  We encourage you to join Mozilla’s app localization project on Transifex. With a localized app, you can extend your reach to include users from all over the world, and by so doing, help to support a global base of open web app users.

    For translators, mobile app localization presents some interesting translation and interface design challenges. You’ll need to think of the strings you’re working with in mobile scale, as interaction elements on a small screen. The localizer plays an important role in creating an interface that people in different countries can easily use and understand.  Please, get involved with Firetext or one of our other projects.

This project is just getting started, and we’re learning as we go. If you have questions or issues not addressed by existing resources such as the Hacks blog series on app localization, Transifex help pages, and other articles and repos referenced above, you can contact us. We’ll do our best to point you in the right direction. Thanks!

  2. Lessons learnt building ViziCities

    Just over 2 weeks ago Peter Smart and Robin Hawkes released the first version of ViziCities to the world. It’s free to use and open-sourced under an MIT license.

    In this post I will talk to you about the lessons learnt during the development of ViziCities. From application architecture to fine-detailed WebGL rendering improvements, we learnt a lot in the past year and we hope that by sharing our experiences we can help others avoid the same mistakes.

    What is ViziCities?

In a rather geeky nutshell, ViziCities is a WebGL application that allows you to visualise anywhere in the world in 3D. Its primary purpose is to look at urban areas, though it’ll work perfectly fine in other places if they have decent OpenStreetMap coverage.

    Demo

    The best way to explain what ViziCities does is to try it out yourself. You’ll need a WebGL-enabled browser and an awareness that you’re using pre-alpha quality software. Click and drag your way around, zoom in using the mouse wheel, and rotate the camera by clicking the mouse wheel or holding down shift while clicking and dragging.

    You can always take a look at this short video if you’re unable to try the demo right now:

    What’s the point of it?

    We started the project for multiple reasons. One of those reasons is that it’s an exciting technical and design challenge for us – both Peter and I thrive on pushing ourselves to the limits by exploring something unknown to us.

    Another reason is that we were inspired by the latest SimCity game and how they visualise data about the performance of your city – in fact, Maxis, the developers behind SimCity reached out to us to tell us how much they like the project!

    There’s something exciting about creating a way to do that for real-world cities rather than fictional ones. The idea of visualising a city in 3D with up-to-date data about that city overlaid is an appealing one. Imagine if you could see census data about your area, education data, health data, crime data, property information, live transport (trains, buses, traffic, etc), you’d be able to learn and understand so much more about the place you live.

    This is really just the beginning – the possibilities are endless.

    Why 3D?

A common question we get is “Why 3D?” The short answer, beyond “because it’s a visually interesting way of looking at a city”, is that 3D allows you to do things and analyse data in ways that you can’t on a 2D map. For example, 3D lets you take height and depth into consideration, so you can better visualise the sheer volume of stuff that lies above and below you in a city – things like pipes and underground tunnels, bridges, overpasses, tall buildings, the weather, and planes! On a 2D map all of this would be a confusing mess; in 3D you see it exactly how it would look in the real world, so you can easily see how objects within a city relate to each other.

    Core technology

    At the most basic level ViziCities is built using Three.js, a WebGL library that abstracts all the complexity of 3D rendering in the browser. We use a whole range of other technologies too, like Web Workers, each of which serves a specific purpose or solves a specific problem that we’ve encountered along the way.

    Let’s take a look at some of those now.

    Lessons learnt

    Over the past year we’ve come from knowing practically nothing about 3D rendering and geographic data visualisation, to knowing at least enough about each of them to be dangerous. Along the way we’ve hit many roadblocks and have had to spend a significant amount of time working out what’s wrong and coming up with solutions to get around them.

    The process of problem solving is one I absolutely thrive on, but it’s not for everybody and I hope that the lessons I’m about to share will help you avoid these same problems, allowing you to save time and do more important things with your life.

    These lessons are in no particular order.

    Using a modular, decoupled application architecture pays off in the long run

    We originally started out with hacky, prototypal experiments that were dependency heavy and couldn’t easily be pulled apart and used in other experiments. Although it allowed us to learn how everything worked, it was a mess and caused a world of pain when it came to building out a proper application.

    In the end we re-wrote everything based on a simple approach using the Constructor Pattern and the prototype property. Using this allowed us to separate out logic into decoupled modules, making everything a bit more understandable whilst also allowing us to extend and replace functionality without breaking anything else (we use the Underscore _.extend method to extend objects).

    Here’s an example of our use of the Constructor Pattern.
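The original post linked to the ViziCities source here; as a hedged illustration of the pattern described, with a hypothetical VIZI.City module:

    /* globals window, _, VIZI */
    (function() {
      "use strict";

      // A self-contained module using the Constructor Pattern
      VIZI.City = function(name) {
        this.name = name;
      };

      // Shared behaviour lives on the prototype
      VIZI.City.prototype.describe = function() {
        return "ViziCities view of " + this.name;
      };
    }());

    // Elsewhere, functionality can be extended or replaced without
    // touching the module itself:
    // _.extend(VIZI.City.prototype, { describe: function() { /* ... */ } });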

    To communicate amongst modules we use the Mediator Pattern. This allows us to keep things as decoupled as possible as we can publish events without having to know about who is subscribing to them.

    Here’s an example of our use of the Mediator Pattern:

    /* globals window, _, VIZI */
    (function() {
      "use strict";
     
      // Apply to other objects using _.extend(newObj, VIZI.Mediator);
      VIZI.Mediator = (function() {
        // Storage for topics that can be broadcast or listened to
        var topics = {};
     
        // Subscribe to a topic, supply a callback to be executed
        // when that topic is broadcast to
        var subscribe = function( topic, fn ){
          if ( !topics[topic] ){ 
            topics[topic] = [];
          }
     
          topics[topic].push( { context: this, callback: fn } );
     
          return this;
        };
     
        // Publish/broadcast an event to the rest of the application
        var publish = function( topic ){
          var args;
     
          if ( !topics[topic] ){
            return false;
          } 
     
          args = Array.prototype.slice.call( arguments, 1 );
          for ( var i = 0, l = topics[topic].length; i < l; i++ ) {
     
            var subscription = topics[topic][i];
            subscription.callback.apply( subscription.context, args );
          }
          return this;
        };
     
        return {
          publish: publish,
          subscribe: subscribe
        };
      }());
    }());

I’d argue that these two patterns are the most useful aspects of the new ViziCities application architecture – they have allowed us to iterate quickly without fear of breaking everything.

    Using promises instead of wrestling with callbacks

    Early on in the project I was talking to my friend Hannah Wolfe (Ghost’s CTO) about how annoying callbacks are, particularly when you want to load a bunch of stuff in order. It didn’t take Hannah long to point out how stupid I was being (thanks Hannah) and that I should be using promises instead of wrestling with callbacks. At the time I brushed them off as another hipster fad but in the end she was right (as always) and from that point onwards I used promises wherever possible to take back control of application flow.

    For ViziCities we ended up using the Q library, though there are plenty others to choose from (Hannah uses when.js for Ghost).

    The general usage is the same whatever library you choose – you set up promises and you deal with them at a later date. However, the beauty comes when you want to queue up a bunch of tasks and either handle them in order, or do something when they’re all complete. We use this in a variety of places, most noticeably when loading ViziCities for the first time (also allowing us to output a progress bar).
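As a hedged illustration (Q is real, but the loader names here are hypothetical), queuing up tasks and reacting when they all complete looks roughly like this:

    // Each loader returns a promise
    var tasks = [loadConfig(), loadUI(), loadData()];

    Q.all(tasks)
      .then(function (results) {
        // All tasks finished; results arrive in the same order as the tasks
        startApplication(results);
      })
      .fail(function (err) {
        console.error('Loading failed:', err);
      });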

    I won’t lie, promises take a little while to get your head around but once you do you’ll never look back. I promise (sorry, couldn’t resist).

    Using a consistent build process with basic tests

    I’ve never been one to care too much about process, code quality, testing, or even making sure things are Done Right™. I’m a tinkerer and I much prefer learning and seeing results than spending what feels like wasted time on building a solid process. It turns out my tinkering approach doesn’t work too well for a large Web application which requires consistency and robustness. Who knew?

    The first step for code consistency and quality was to enable strict mode and linting. This meant that the most glaring of errors and inconsistencies were flagged up early on. As an aside, due to our use of the Constructor Pattern we wrapped each module in an anonymous function so we could enable strict mode per module without necessarily enabling it globally.

    At this point it was still a faff to use a manual process for creating new builds (generating a single JavaScript file with all the modules and external dependencies) and to serve the examples. The break-through was adopting a proper build system using Grunt, thanks mostly to a chat I had with Jack Franklin about my woes at an event last year (he subsequently gave me his cold which took 8 weeks to get rid of, but it was worth it).

    Grunt allows us to run a simple command in the terminal to do things like automatically test, concatenate and minify files ready for release. We also use it to serve the local build and auto-refresh examples if they’re open in a browser. You can look at our Grunt setup to see how we set everything up.
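A pared-down Gruntfile along those lines might look like this (the task configuration is illustrative, not ViziCities’ actual setup):

    module.exports = function(grunt) {
      grunt.initConfig({
        jshint: {
          all: ['src/**/*.js']
        },
        concat: {
          dist: { src: ['src/**/*.js'], dest: 'build/vizicities.js' }
        },
        uglify: {
          dist: { files: { 'build/vizicities.min.js': ['build/vizicities.js'] } }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-jshint');
      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');

      // Running `grunt` lints, concatenates and minifies in one go
      grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
    };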

    For automated testing we use Mocha, Chai, Sinon.js, Sinon-Chai and PhantomJS. Each of which serves a slightly different purpose in the testing process:

• Mocha is used for the overall testing framework
• Chai is used as an assertion library that allows you to write readable tests
• Sinon.js is used to fake application logic and track behaviour through the testing process
• Sinon-Chai adds Sinon.js-aware assertions to Chai so the two play nicely together
• PhantomJS is used to run client-side tests in a headless browser from the terminal

    We’ve already put together some (admittedly basic) tests and we plan to improve and increase the test coverage before releasing 0.1.0.

    Travis CI is used to make sure we don’t break anything when pushing changes to GitHub. It automatically performs linting and runs our tests via Grunt when changes are pushed, including pull requests from other contributors (a life saver). Plus it lets you have a cool badge to put on your GitHub readme that shows everyone whether the current version is building successfully.

    Together, these solutions have made ViziCities much more reliable than it has ever been. They also mean that we can move rapidly by building automatically, and they allow us to not have to worry so much about accidentally breaking something. The peace of mind is priceless.

    Monitoring performance to measure improvements

    General performance in frames-per-second can be monitored using FPSMeter. It’s useful for debugging parts of the application that are locking up the browser or preventing the rendering loop from running at a fast pace.
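A hedged sketch of dropping FPSMeter into a render loop (constructor options omitted):

    var meter = new FPSMeter();

    function render() {
      meter.tickStart();                // begin measuring this frame
      renderer.render(scene, camera);   // the usual Three.js draw call
      meter.tick();                     // frame done: update the FPS readout
      requestAnimationFrame(render);
    }

    requestAnimationFrame(render);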

    You can also use the Three.js renderer.info property to monitor what you’re rendering and how it changes over time.

It’s worth keeping an eye on this to make sure objects are not being rendered when they move out of the current viewport. Early on in ViziCities we had a lot of issues with this not happening, and the only way to be sure was to monitor these values.
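For example, a minimal sketch of polling renderer.info (property names as in the Three.js releases of that era):

    // Log rendering stats once per second to spot runaway object counts
    setInterval(function () {
      console.log(
        'draw calls:', renderer.info.render.calls,
        'vertices:', renderer.info.render.vertices,
        'geometries in memory:', renderer.info.memory.geometries
      );
    }, 1000);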

    Turning geographic coordinates into 2D pixel coordinates using D3.js

    One of the very first problems we encountered was how to turn geographic coordinates (latitude and longitude) into pixel-based coordinates. The math involved to achieve this isn’t simple and it gets even more complicated if you want to consider different geographic projections (trust me, it gets confusing fast).

    Fortunately, the D3.js library has already solved these problems for you, specifically within its geo module. Assuming you’ve included D3.js, you can convert coordinates like so:

    var geoCoords = [-0.01924, 51.50358]; // Central point as [lon, lat]
    var tileSize = 256; // Pixel size of a single map tile
    var zoom = 15; // Zoom level
     
    var projection = d3.geo.mercator()
      .center(geoCoords) // Geographic coordinates of map centre
      .translate([0, 0]) // Pixel coordinates of .center()
      .scale(tileSize << zoom); // Scaling value
     
// Pixel location of Heathrow Airport in relation to central point (geoCoords)
    var pixelValue = projection([-0.465567112, 51.4718071791]); // Returns [x, y]

The scale() value is the hardest part of the process to understand. It basically changes the pixel value that’s returned based on how zoomed in you want to be (imagine zooming in on Google Maps). It took me a very long time to understand, so I detailed how scale works in the ViziCities source code for others to learn from (and so I can remember!). Once you nail the scaling, you’ll be in full control of the geographic-to-pixel conversion process.

    Extruding 2D building outlines into 3D objects on-the-fly

    While 2D building outlines are easy to find, turning them into 3D objects turned out to be not quite as easy as we imagined. There’s currently no public dataset containing 3D buildings, which is a shame though it makes it more fun to do it yourself.

    What we ended up using was the THREE.ExtrudeGeometry object, passing in a reference to an array of pixel points (as a THREE.Shape object) representing a 2D building footprint.

    The following is a basic example that would extrude a 2D outline into a 3D object:

    var shape = new THREE.Shape();
    shape.moveTo(0, 0);
    shape.lineTo(10, 0);
    shape.lineTo(10, 10);
    shape.lineTo(0, 10);
    shape.lineTo(0, 0); // Remember to close the shape
     
    var height = 10;
    var extrudeSettings = { amount: height, bevelEnabled: false };
     
var geom = new THREE.ExtrudeGeometry( shape, extrudeSettings );
// Pass a material for predictable results; without one, Three.js of this
// era assigns a MeshBasicMaterial with a random colour
var mesh = new THREE.Mesh( geom, new THREE.MeshBasicMaterial( { color: 0xcccccc } ) );

Interestingly, it turned out quicker to generate 3D objects on the fly than to pre-render them and load them in. This was mostly because downloading a pre-rendered 3D object takes longer than downloading the 2D coordinate string and generating the geometry at runtime.

    Using Web Workers to dramatically increase performance and prevent browser lockup

    One thing we did notice with the generation of 3D objects was that it locked up the browser, particularly when processing a large number of shapes at the same time (you know, like an entire city). To work around this we delved into the magical world of Web Workers.

    Web Workers allow you to run parts of your application in a completely separate processor thread to the browser renderer, meaning anything that happens in the Web Worker thread won’t slow down the browser renderer (ie. it won’t lock up). It’s incredibly powerful but it can also be incredibly complicated to get working as you want it to.
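At its simplest, before any library gets involved, the split looks like this (the file and function names are hypothetical):

    // main.js - hand the heavy geometry work to a separate thread
    var worker = new Worker('geometry-worker.js');

    worker.onmessage = function (event) {
      // Results arrive asynchronously, without blocking the render loop
      addToScene(event.data);
    };

    worker.postMessage({ outlines: buildingOutlines });

    // geometry-worker.js
    onmessage = function (event) {
      var processed = extrude(event.data.outlines); // CPU-heavy work lives here
      postMessage(processed);
    };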

    We ended up using the Catiline.js Web Worker library to abstract some of the complexity and allow us to focus on using Web Workers to our advantage, rather than fighting against them. The result is a Web Worker processing script that’s passed 2D coordinate arrays and returns generated 3D objects.

    After getting this working we noticed that while the majority of browser lock-ups were eliminated, there were two new lock-ups introduced. Specifically, there was a lock-up when the 2D coordinates were passed into the Web Worker scripts, and another lock-up when the 3D objects were returned back to the main application.

The solution to this problem came from the inspiring mind of Vladimir Agafonkin (of LeafletJS fame). He helped me understand that to avoid the latter of the lock-ups (passing the 3D objects back to the application) I needed to use transferable objects, namely ArrayBuffer objects. Doing this allows you to effectively transfer ownership of objects created within a Web Worker thread to the main application thread, rather than copying them. We implemented this to great effect, eliminating the second lock-up entirely.
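The transfer itself is a one-line change; a minimal sketch (the typed-array layout is illustrative):

    // Inside the worker: pack the generated vertex data into a typed array...
    var vertices = new Float32Array(vertexData);

    // ...then hand over its underlying buffer instead of copying it.
    // After this call the worker can no longer touch `vertices`.
    postMessage(vertices.buffer, [vertices.buffer]);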

To eliminate the first lock-up (passing 2D coordinates into the Web Worker) we needed to take a different approach. The problem again lies with the copying of the data, though in this case you can’t use transferable objects. The solution instead lies in loading the data into the Web Worker script using the importScripts method. Unfortunately, I’ve not yet worked out a way to do this with dynamic data sourced from XHR requests. Still, this is definitely a solution that would work.

    Using simplify.js to reduce the complexity of 2D shapes before rendering

    Something we found early on was that complex 2D shapes caused a lot of strain when rendered as 3D objects en-masse. To get around this we use Vladimir Agafonkin’s simplify.js library to reduce the quality of 2D shapes before rendering.

    It’s a great little tool that allows you to keep the general shape while dramatically reducing the number of points used, thus reducing its complexity and render cost. By using this method we could render many more objects with little to no change in how the objects look.
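Usage is pleasantly small; a hedged sketch (the tolerance value is illustrative):

    // points: [{x: ..., y: ...}, ...] describing a building footprint
    var tolerance = 3;        // higher = fewer points, rougher shape
    var highQuality = false;  // true is slower but more accurate
    var simplified = simplify(points, tolerance, highQuality);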

    Getting accurate heights for buildings is really difficult

    One problem we never imagined encountering was getting accurate height information for buildings within a city. While the data does exist, it’s usually unfathomably expensive or requires you to be in education to get discounted access.

    The approach we went for uses accurate height data from OpenStreetMap (if available), falling back to a best-guess that uses the building type combined with 2D footprint area. In most cases this will give a far more accurate guess at the height than simply going for a random height (which is how we originally did it).

    Restricting camera movement to control performance

The original dream with ViziCities was to visualise an entire city in one go, flying around, looking down on the city from the clouds like some kind of God. We fast learnt that this came at a price, a performance price and a data-size price, neither of which we were able to afford.

    When we realised this wasn’t going to be possible we looked at how to approach things from a different angle. How can you feel like you’re looking at an entire city without rendering an entire city? The solution was deceptively simple.

    By restricting camera movement to only view a small area at a time (limiting zoom and rotation) you’re able to have much more control over how many objects can possibly be in view at one time. For example, if you prevent someone from being able to tilt a camera to look at the horizon then you’ll never need to render every single object between the camera and the edge of your scene.

    This simple approach means that you can go absolutely anywhere in the world within ViziCities, whether a thriving metropolis or a country-side retreat, and not have to change the way you handle performance. Every situation is predictable and therefore optimisable.

    Tile-based batching of objects to improve loading and rendering performance

    Another approach we took to improve performance was by splitting the entire world into a grid system, exactly like how Google and other map providers do things. This allows you to load data in small chunks that eventually build up to a complete image.

    In the case of ViziCities, we use the tiles to only request JSON data for the geographic area visible to you. This means that you can start outputting 3D objects as each tile loads rather than waiting for everything to load.

    A by-product of this approach is that you get to benefit from frustum culling, which is when objects not within your view are not rendered, dramatically increasing performance.

    Caching of loaded data to save on time and resources when viewing the same location

    Coupled with the tile-based loading is a caching system that means that you don’t request the same data twice, instead pulling the data from a local store. This saves bandwidth but also saves time as it can take a while to download each JSON tile.

    We currently use a dumb local approach that resets the cache on each refresh, but we plan to implement something like localForage to have the cache persist between browser sessions.
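A hedged sketch of what that persistent cache could look like with localForage (the key scheme and helper names are hypothetical):

    // Check the persistent cache before hitting the network
    localforage.getItem('tile-' + tileKey).then(function (cached) {
      if (cached) {
        renderTile(cached);
      } else {
        fetchTileJSON(tileKey).then(function (data) {
          localforage.setItem('tile-' + tileKey, data);
          renderTile(data);
        });
      }
    });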

    Using the OpenStreetMap Overpass API rather than rolling your own PostGIS database

    Late into the development of ViziCities we realised that it was unfeasible to continue using our own PostGIS database to store and manipulate geographic data. For one, it would require a huge server just to store the entirety of OpenStreetMap in a database, but really it was just a pain to set up and manage and an external approach was required.

    The solution came in the shape of the Overpass API, an external JSON and XML endpoint to OpenStreetMap data. Overpass allows you to send a request for specific OpenStreetMap tags within a bounding box (in our case, a map tile):

    http://overpass-api.de/api/interpreter?data=[out:json];((way(51.50874,-0.02197,51.51558,-0.01099)[%22building%22]);(._;node(w);););out;

    And get back a lovely JSON response:

{
  "version": 0.6,
  "generator": "Overpass API",
  "osm3s": {
    "timestamp_osm_base": "2014-03-02T22:08:02Z",
    "copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
  },
  "elements": [
    {
      "type": "node",
      "id": 262890340,
      "lat": 51.5118466,
      "lon": -0.0205134
    },
    {
      "type": "node",
      "id": 278157418,
      "lat": 51.5143963,
      "lon": -0.0144833
    },
    ...
    {
      "type": "way",
      "id": 50258319,
      "nodes": [
        638736123,
        638736125,
        638736127,
        638736129,
        638736123
      ],
      "tags": {
        "building": "yes",
        "leisure": "sports_centre",
        "name": "The Workhouse"
      }
    },
    {
      "type": "way",
      "id": 50258326,
      "nodes": [
        638736168,
        638736170,
        638736171,
        638736172,
        638736168
      ],
      "tags": {
        "building": "yes",
        "name": "Poplar Playcentre"
      }
    },
    ...
  ]
}

The by-product of this was that you get worldwide support out of the box and benefit from minute-by-minute OpenStreetMap updates. Seriously, if you edit or add something to OpenStreetMap (please do) it will show up in ViziCities within minutes.

    Limiting the number of concurrent XHR requests

Something we learnt very recently was that spamming the Overpass API endpoint with a tonne of XHR requests at the same time wasn’t particularly good for us or for Overpass. It generally caused delays as Overpass rate-limited us, so data took a long time to make its way back to the browser. The great thing was that, by already using promises to manage the XHR requests, we were half-way ready to solve the problem.

The final piece of the puzzle was to use throat.js to limit the number of concurrent XHR requests, letting us take control and load resources without abusing external APIs. It’s beautifully simple and works perfectly. No more loading delays!
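A hedged sketch of that final piece (the concurrency limit and loader name are illustrative):

    // Allow at most 4 tile requests in flight at any one time
    var limitedLoad = throat(4, function (tileUrl) {
      return loadTileJSON(tileUrl); // returns a promise
    });

    Q.all(tileUrls.map(limitedLoad)).then(onAllTilesLoaded);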

    Using ViziCities in your own project

    I hope that these lessons and tips have helped in some way, and I hope that it encourages you to try out ViziCities for yourself. Getting set up is easy and well documented, just head to the ViziCities GitHub repo and you’ll find everything you need.

    Contributing to ViziCities

Part of the reason why we opened up ViziCities was to encourage other people to help build it and make it even more awesome than Peter and I could ever make it. Since launch, we’ve had over 1,000 people star the project on GitHub, as well as nearly 100 forks. More importantly, we’ve had 9 pull requests from members of the community who we didn’t previously know and who we’ve not asked to help. It’s such an amazing feeling to see people help out like that.

    If we were to pick a favourite contribution so far, it would be adding the ability to load anywhere in the world by putting coordinates in the URL. Such a cool feature and one that has made the project much more usable for everyone.

    We’d love to have more people contribute, whether dealing with issues or playing with the visual styling. Read more about how to contribute and give it a go!

    What’s next?

It’s been a crazy year and an even crazier fortnight since we launched the project. We never imagined it would excite people in the way it has; it’s blown us away.

    The next steps are slowly going through the issues and getting ready for the 0.1.0 release, which will still be alpha quality but will be sort of stable. Aside from that we’ll continue experimenting with exciting new technologies like the Oculus Rift (yes, that’s me with one strapped to my face)…

    Visualising realtime air traffic in 3D…

    And much, much more. Watch this space.

  3. Custom Elements for Custom Applications – Web Components with Mozilla’s Brick and X-Tag

    In this article, we will explore the use of Mozilla’s Brick and X-Tag libraries. First we’ll use Brick to rapidly prototype a simple application. Then, we’ll build a custom web component using X-Tag.

    The Technology

    Brick: Curated Web Components

Brick is a set of modular, reusable UI components. The components are designed for adaptive, responsive applications and are a great choice for going mobile first, which is by and large how web-based applications should be developed. This philosophy and its design patterns accommodate a wide range of devices. Brick components aren’t just for mobile apps, they are for modern apps.

    Brick is kind of like a library, but really you should think of it as a curated collection of web components.

Brick’s components are used declaratively in your HTML <like-this> and can be styled with CSS like regular non-custom elements. Brick components have their own micro-APIs for interacting with them. These components are building blocks. Do you like Lego? Good. You will like Brick.

    Brick web components are made using the X-Tag custom elements polyfill.

    What is X-Tag?

    X-Tag is a library that polyfills several (and soon all) features that enable web components in the browser. In particular, X-Tag is focused on polyfilling the creation of Custom Elements so that you can extend the DOM using your own declarative syntax and element-specific API.

When you are using components from Brick, you are using web components made with the X-Tag library. Brick includes X-Tag core, so if you include Brick and then decide to make your own custom elements, you do not need to include X-Tag to do so; all the features of X-Tag are already available to you.

    Download Demo Project Files

    Download the demo project files. First we’ll use the material in the simple-app-with-bricks folder.

    Using Bricks in an app

    Here we will build a simple skeleton app using <x-appbar>, <x-deck>, and <x-card>. x-appbar provides a tidy header bar for our application, and x-cards placed as children of an x-deck give us multiple views with transitions.

    First, we start with a barebones HTML document and then include Brick’s CSS and JS, along with our own application-specific code (app.css and app.js respectively in the example to follow).

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width">
        <link rel="stylesheet" type="text/css" href="css/brick.min.css">
        <link rel="stylesheet" type="text/css" href="css/app.css">
        <title>Simple - Brick Demo</title> 
      </head> 
      <body> 
    <!-- Some Brick components will go here -->
    <script type="text/javascript" src="js/brick.min.js"></script>
    <script type="text/javascript" src="js/app.js"></script>
  </body>
</html>

    Now we add some Brick elements:

    <x-appbar id="bar">
      <header>Simple Brick App</header>
      <button id="view-prev">Previous View</button>
      <button id="view-next">Next View</button>
    </x-appbar>

    Below your x-appbar, create an x-deck with some x-cards as children. You can give the x-cards any content you like.

    <!-- Place your x-deck directly after your x-appbar -->
    <x-deck id="views">
      <x-card>
        <h1>View 1</h1>
        <p>Hello, world!</p>
      </x-card>
      <x-card>
        <h1>Pick a Date</h1>
        <p>&lt;x-datepicker&gt;s are a polyfill for &lt;input type="date"&gt;</p>
        <x-datepicker></x-datepicker>
        <p>Just here to show you another tag in action!</p>
      </x-card>
      <x-card>
        <h1>A Random Cat To Spice Things Up</h1>
        <!-- Fetches from the Lorem Pixel placeholder content service -->
        <img src="http://lorempixel.com/300/300/cats">
      </x-card>
    </x-deck>

We have almost completed the structure for a simple application. All we need now is a little bit of CSS and JavaScript to tie it together.

    document.addEventListener('DOMComponentsLoaded', function() { 
      // Run any code here that depends on Brick components being loaded first 
      // Very similar to jQuery's document.ready() 
     
      // Grab the x-deck and the two buttons found in our x-appbar and assign to local vars
  var deck = document.getElementById("views"),
      nextButton = document.getElementById("view-next"),
      prevButton = document.getElementById("view-prev");
     
      // Add event listeners so that when we click the buttons, our views transition between one another
      prevButton.addEventListener("click", function(){ deck.previousCard(); }); 
      nextButton.addEventListener("click", function(){ deck.nextCard(); });
    });

    <x-calendar> example in our demo application

    Some simple CSS to style our fledgling application:

    html, body {
      margin: 0;
      padding: 0;
      font-family: sans-serif;
      height: 100%;
    }
     
    h1 {
      font-size: 100%;
    }
     
x-deck > x-card {
  background: #eee;
  padding: 0.6em;
}

    Ka-bam! With a little bit of declarative markup and a few tweaks, we have a skeleton that anyone can use to make a multi-view app within a single HTML document. If you check out the markup in the developer tools, you’ll see the Brick custom elements living happily alongside vanilla HTML elements – you can use the developer tools to inspect and manipulate them in the same way as regular old HTML.

    Custom elements alongside HTML elements in the Firefox Developer Tools

    Now let’s learn how to make our own custom element using X-Tag.

    Creating Custom Elements (Your Own ‘Bricks’) Using X-Tag

    Let’s say we have a mobile application in which the user takes an action that results in a blocking task. Maybe the application is waiting for an external service. The program’s next instruction to complete the user-initiated task depends on the data from the server, so unfortunately we have to wait. For the sake of our purposes here, let’s pretend we can’t modify our program too much and assume an entrenched architecture – maybe we can’t do much else other than communicate to the user until we find a way to deal with the blocking better. We have to do the best with what we have.

    We will create a custom modal spinner that will tell the user to wait. It’s important to give your users feedback on what’s happening in your app when they don’t get to complete their task in a timely manner. A frustrated or confused user might give up on using your app.

    You will want to switch to the x-status-hud folder inside of the demo materials now.

    1. Registering Your Custom Element

    X-Tag relies on several different events to detect and upgrade elements to custom elements. X-Tag will work whether the element was present in the original source document, added by setting the innerHTML property, or created dynamically via document.createElement. You should take a look at the Helpers section of the X-Tag documentation as it covers various functions that will allow you to work with your custom elements just like vanilla ones.

    The first thing that we need to do is register our custom element with X-Tag so that X-Tag knows what to do if and when it encounters our custom element. We do that by calling xtag.register.

    xtag.register('x-status-hud', {
      // We will give our tag custom behavior here for our status indicating spinner
    });

!IMPORTANT: All custom element names must contain a hyphen. Why is this? The idea here is that there are no standard HTML elements with a hyphen in them, so we don’t trample existing namespaces and have collisions. You do not have to prefix with ‘x-’, this is just a convention used for components created with X-Tag in the Brick ecosystem. Once upon a time in the early days of the W3C specification for custom elements, it was speculated that all custom elements would have an x- prefix; this restriction was relaxed in later versions of the specification. If you were to name an element ‘bacon-eggs’ or ‘adorable-kitten’, both of these would be perfectly valid names. Choose a name that describes what your element is or does.

If we wanted to, we could choose which HTML element is used as our base element before upgrading. We can also set a specific prototype for our element if we want to inherit functionality from a different element. You can declare these as follows:

    xtag.register('x-superinput', {
      extends: 'input',
      prototype: Object.create(HTMLInputElement.prototype)
    });

    The element we are building doesn’t require these properties to be set explicitly. They are worth mentioning because they will be useful to you when you write more advanced components and want a specific level of control over them.

    2. The Element Lifecycle

    Custom elements have events that fire at certain times during their lifetime. Events are fired when an element is created, inserted into the DOM, removed from the DOM, and when attributes are set. You can take advantage of none or all of these events.

lifecycle:{
  created: function(){
    // fired once at the time a component
    // is initially created or parsed
  },
  inserted: function(){
    // fired each time a component
    // is inserted into the DOM
  },
  removed: function(){
    // fired each time an element
    // is removed from DOM
  },
  attributeChanged: function(){
    // fired when attributes are set
  }
}

    Our element is going to use the created event. When this event fires, our code will add some child elements.

    xtag.register('x-status-hud', {
      lifecycle: {
        created: function(){
            this.xtag.textEl = document.createElement('strong');
     
            this.xtag.spinnerContainer = document.createElement('div');
            this.xtag.spinner = document.createElement('div');
     
            this.xtag.spinnerContainer.className = 'spinner';
     
            this.xtag.spinnerContainer.appendChild(this.xtag.spinner);
            this.appendChild(this.xtag.spinnerContainer);
            this.appendChild(this.xtag.textEl);
        }
      }
      // More configuration of our element will follow here
    });

    3. Adding Custom Methods

We need control over when we show or hide our status HUD, so let’s add some methods to our component. A simple toggle() may suffice for some use cases, but let’s throw in individual hide() and show() functions too.

     
    xtag.register('x-status-hud', {
      lifecycle: {
        created: function(){
          this.xtag.textEl = document.createElement('strong');
     
          this.xtag.spinnerContainer = document.createElement('div');
          this.xtag.spinner = document.createElement('div');
     
          this.xtag.spinnerContainer.className = 'spinner';
     
          this.xtag.spinnerContainer.appendChild(this.xtag.spinner);
          this.appendChild(this.xtag.spinnerContainer);
          this.appendChild(this.xtag.textEl);
        }
      },
     
      methods: {
        toggle: function(){
          this.visible = this.visible ? false : true;
        },
     
        show: function (){
          this.visible = true;
        },
     
        hide: function (){
          this.visible = false;
        }
  }
});

    4. Adding Custom Accessors

    Something important to note about properties on custom elements: they do not have to map to an attribute. This is by design because some setters could be very complex and not have a sensible attribute equivalent.

    If you would like an attribute and property to be linked, pass in an empty object literal to the attribute. You’ll see below where this has been done for the label attribute.

    xtag.register('x-status-hud', {
      lifecycle: {
        created: function(){
          this.xtag.textEl = document.createElement('strong');
     
          this.xtag.spinnerContainer = document.createElement('div');
          this.xtag.spinner = document.createElement('div');
     
          this.xtag.spinnerContainer.className = 'spinner';
     
          this.xtag.spinnerContainer.appendChild(this.xtag.spinner);
          this.appendChild(this.xtag.spinnerContainer);
          this.appendChild(this.xtag.textEl);
        }
      },
     
      methods: {
        toggle: function(){
          this.visible = this.visible ? false : true;
        },
     
        show: function (){
          this.visible = true;
        },
     
        hide: function (){
          this.visible = false;
        }
      },
     
      accessors: {
        visible: {
          attribute: { boolean: true }
        },
     
        label: {
          attribute: {},
     
          set: function(text) {
            this.xtag.textEl.innerHTML = text;
          }
        }
      }
    }); // End tag declaration

    If the difference between attributes and properties is unclear to you, take a look at the top answer to this Stack Overflow question. Although the question being asked is about something else entirely (jQuery), the top answer has a great explanation that will help you understand the relationship between attributes and properties.

    The Finished Component

    When we write code that depends on custom elements having been loaded already, we add an event listener that fires when the components have finished loading. This is sort of like jQuery’s document.ready.

    <script type="text/javascript">
    document.addEventListener('DOMComponentsLoaded', function(){
      // Any HUD customizations should be done here.
      // We just pop up the HUD here to show you it works!
      var testHUD = document.getElementById("test");
      testHUD.label = "Please Wait...";
      testHUD.show();
    }, false);
    </script>
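For reference, the markup this script assumes is nothing more than the custom tag itself, carrying the id the script looks up:

    <x-status-hud id="test"></x-status-hud>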

    X-Tag Status HUD Demo

    There you have it. We’ve created a simple modular, reusable widget for our client-side code. It’s a good starting point. But is it really finished?

    Some ways we may improve this element:

• Have the element recalculate its size when the attributeChanged event is fired, and have the component resize to fit the label as it is updated rather than truncating the label with an ellipsis
    • Let the developer set an image, such as an animated GIF, in place of the CSS spinner to customize the user experience further
    • Have a progress bar instead of a spinner to give the user some additional information about task progress

    Use your creativity to come up with a small set of practical features and improvements beyond these as an exercise on your own.

    After following this tutorial you should understand how to extend the DOM with your own custom elements. If you’re stuck, leave a comment and we’ll do our best to get you back on track. If you’re not stuck, post your GitHub repo and show us what you’ve made with Brick and X-Tag. Happy hacking!

  4. The Making of the Time Out Firefox OS app

    A rash start into adventure

    So we told our client that yes, of course, we would do their Firefox OS app. We didn’t know much about FFOS at the time. But, hey, we had just completed refactoring their native iOS and Android apps. Web applications were our core business all along. So what was to be feared?

    More than we thought, it turned out. Some of the dragons along the way we fought and defeated ourselves. At times we feared that we wouldn’t be able to rescue the princess in time (i.e. before MWC 2013). But whenever we got really lost in detail forest, the brave knights from Mozilla came to our rescue. In the end, it all turned out well and the team lived happily ever after.

    But here’s the full story:

    Mission & challenge

    Just like their iOS and Android apps, Time Out‘s new Firefox OS app was supposed to allow browsing their rich content on bars, restaurants, things to do and more by category, area, proximity or keyword search, patient zero being Barcelona. We would need to show results as illustrated lists as well as visually on a map and have a decent detail view, complete with ratings, access details, phone button and social tools.

    But most importantly, and in addition to what the native apps did, this app was supposed to do all of that even when offline.

    Oh, and there needed to be a presentable, working prototype in four weeks time.

Cross-platform reusability of the code, as a mobile website or as the base of HTML5 apps on other mobile platforms, was clearly priority 2 but still to be kept in mind.

    The princess was clearly in danger. So we arrested everyone on the floor that could possibly be of help and locked them into a room to get the basics sorted out. It quickly emerged that the main architectural challenges were that

    • we had a lot of things to store on the phone, including the app itself, a full street-level map of Barcelona, and Time Out’s information on every venue in town (text, images, position & meta info),
    • at least some of this would need to be loaded from within the app; once initially and synchronizable later,
    • the app would need to remain interactively usable during these potentially lengthy downloads, so they’d need to be asynchronous,
• whenever the browser location changed, these downloads would be interrupted.

    In effect, all the different functionalities would have to live within one single HTML document.

    One document plus hash tags

    For dynamically rendering, changing and moving content around as required in a one-page-does-all scenario, JavaScript alone didn’t seem like a wise choice. We’d been warned that Firefox OS was going to roll out on a mix of devices including the very low cost class, so it was clear that fancy transitions of entire full-screen contents couldn’t be orchestrated through JS loops if they were to happen smoothly.

    On the plus side, there was no need for JS-based presentation mechanics. With Firefox OS not bringing any graveyard of half-dead legacy versions to cater to, we could (finally!) rely on HTML5 and CSS3 alone and without fallbacks. Even beyond FFOS, the quick update cycles in the mobile environment didn’t seem to block the path for taking a pure CSS3 approach further to more platforms later.

    That much being clear, which better place to look for best practice examples than Mozilla Hacks? After some digging, Thomas found Hacking Firefox OS in which Luca Greco describes the use of fragment identifiers (aka hashtags) appended to the URL to switch and transition content via CSS alone, which we happily adopted.

    Another valuable source of ideas was a list of GAIA building blocks on Mozilla’s website, which has since been replaced by the even more useful Building Firefox OS site.

    In effect, we ended up thinking in terms of screens. Each physically a <div>, whose visibility and transitions are governed by :target CSS selectors that draw on the browser location’s hashtag. Luckily, there’s also the onHashChange event that we could additionally listen to in order to handle the app-level aspects of such screen changes in JavaScript.

    Our main HTML and CSS structure hence looked like this:
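The original post embedded the code here; a minimal sketch of the scheme just described (the screen IDs are illustrative):

    <section id="screens">
      <div id="home" class="screen">...</div>
      <div id="search" class="screen">...</div>
      <div id="detail" class="screen">...</div>
    </section>

with CSS along the lines of:

    /* Screens are hidden by default... */
    .screen { display: none; }

    /* ...and shown when the location hash matches, e.g. #search */
    .screen:target { display: block; }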

    And a menu

We modeled the drawer menu very similarly, except that it sits in a <nav> element on the same level as the <section> container holding all the screens. Its activation and deactivation works by catching the menu icon clicks, then actively changing the screen container’s data-state attribute from JS, which triggers the corresponding CSS3 slide-in / slide-out transition (of the screen container, revealing the menu beneath).

    This served as our “Hello, World!” test for CSS3-based UI performance on low-end devices, plus as a test case for combining presentation-level CSS3 automation with app-level explicit status handling. We took down a “yes” for both.
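A hedged sketch of that mechanism (the attribute values and selectors are illustrative):

    // App level: toggle the attribute when the menu icon is clicked
    menuButton.addEventListener('click', function () {
      var screens = document.getElementById('screens');
      var open = screens.getAttribute('data-state') === 'menu-open';
      screens.setAttribute('data-state', open ? '' : 'menu-open');
    });

and on the CSS side:

    /* Presentation level: the attribute drives the slide transition */
    #screens { transition: transform 0.3s ease; }
    #screens[data-state="menu-open"] { transform: translateX(80%); }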

    UI

    By the time we had put together a dummy around these concepts, the first design mockups from Time Out came in so that we could start to implement the front end and think about connecting it to the data sources.

For presentation, we tried hard to keep the HTML and CSS to the absolute minimum, with Mozilla’s GAIA examples once more being a very valuable source of ideas.

    Again, targeting Firefox OS alone allowed us to break free of the backwards compatibility hell that we were still living in, desktop-wise. No one would ask us Will it display well in IE8? or worse things. We could finally use real <section>, <nav>, <header>, and <menu> tags instead of an army of different classes of <div>. What a relief!

    The clear, rectangular, flat and minimalistic design we got from Time Out also did its part to keep the UI HTML simple and clean. After we were done with creating and styling the UI for 15 screens, our HTML had only ~250 lines. We later improved that to 150 while extending the functionality, but that’s a different story.

    Speaking of styling, not everything that had looked good on desktop Firefox even in its responsive design view displayed equally well on actual mobile devices. Some things that we fought with and won:

    Scale: The app looked quite different when viewed on the reference device (a TurkCell branded ZTE device that Mozilla had sent us for testing) and on our brand new Nexus 4s:

    After a lot of experimenting, tearing some hair and looking around how others had addressed graceful, proportional scaling for a consistent look & feel across resolutions, we stumbled upon this magic incantation:

    <meta name="viewport" content="user-scalable=no, initial-scale=1,
    maximum-scale=1, width=device-width" />

    What it does, to quote an article at Opera, is to tell the browser that there is “No scaling needed, thank you very much. Just make the viewport as many pixels wide as the device screen width”. It also prevents accidental scaling while the map is zoomed. There is more information on the topic at MDN.

    Then there are things that necessarily get pixelated when scaled up to high resolutions, such as the API based venue images. Not a lot we could do about that. But we could at least make the icons and logo in the app’s chrome look nice in any resolution by transforming them to SVG.

    Another issue on mobile devices was that users have to touch the content in order to scroll it, so we wanted to prevent the automatic highlighting that comes with that:

    li, a, span, button, div
    {
        outline:none;
        -moz-tap-highlight-color: transparent;
        -moz-user-select: none;
        -moz-user-focus:ignore
    }

We’ve since been warned that suppressing the default highlighting can be an issue in terms of accessibility, so you might want to consider this carefully.

    Connecting to the live data sources

    So now we had the app’s presentational base structure and the UI HTML / CSS in place. It all looked nice with dummy data, but it was still dead.

    Trouble with bringing it to life was that Time Out was in the middle of a big project to replace its legacy API with a modern Graffiti based service and thus had little bandwidth for catering to our project’s specific needs. The new scheme was still prototypical and quickly evolving, so we couldn’t build against it.

    The legacy construct already comprised a proxy that wrapped the raw API into something more suitable for consumption by their iOS and Android apps, but after close examination we found that we better re-re-wrap that on the fly in PHP for a couple of purposes:

• Adding CORS support to work around cross-origin restrictions, with the API and the app living in different subdomains of timeout.com,
• stripping API output down to what the FFOS app really needed, which we could see would reduce bandwidth and increase speed by an order of magnitude,
    • laying the foundation for harvesting of API based data for offline use, which we already knew we’d need to do later

    As an alternative to server-side CORS support, one could also think of using the SystemXHR API. It is a mighty and potentially dangerous tool however. We also wanted to avoid any needless dependency on FFOS-only APIs.

    So while the approach wasn’t exactly future proof, it helped us a lot to get to results quickly, because the endpoints that the app was calling were entirely of our own choice and making, so that we could adapt them as needed without time loss in communication.

    Populating content elements

    For all things dynamic and API-driven, we used the same approach at making it visible in the app:

    • Have a simple, minimalistic, empty, hidden, singleton HTML template,
    • clone that template (N-fold for repeated elements),
    • ID and fill the clone(s) with API based content.
    • For super simple elements, such as <li>s, save the cloning and whip up the HTML on the fly while filling.

    As an example, let’s consider the filters for finding venues. Cuisine is a suitable filter for restaurants, but certainly not for museums. Same is true for filter values. There are vegetarian restaurants in Barcelona, but certainly no vegetarian bars. So the filter names and lists of possible values need to be asked of the API after the venue type is selected.

    In the UI, the collapsible category filter for bars & pubs looks like this:

    The template for one filter is a direct child of the one and only

    <div id="templateContainer">

    which serves as our central template repository for everything cloned and filled at runtime and whose only interesting property is being invisible. Inside it, the template for search filters is:

    <div id="filterBoxTemplate">
      <span></span>
      <ul></ul>
    </div>

    So for each filter that we get for any given category, all we had to do was to clone, label, and then fill this template:

    $('#filterBoxTemplate')
      .clone()
      .attr('id', filterItem.id)
      .appendTo('#categoryResultScreen .filter-container');
    ...
    $('#' + filterItem.id).children('.filter-button').html(filterItem.name);

    As you certainly guessed, we then had to call the API once again for each filter in order to learn about its possible values, which were then rendered into <li> elements within the filter's <ul> on the fly:

    $("#" + filterId).children('.filter_options').html(
    '<li><span>Loading ...</span></li>');
    
    apiClient.call(filterItem.api_method, function (filterOptions)
    {
      ...
      $.each(filterOptions, function(key, option)
      {
        var entry = $('<li filterId="' + option.id + '"><span>'
          + option.name + '</span></li>');
    
        if (selectedOptionId && selectedOptionId == filterOptionId)
        {
          entry.addClass('filter-selected');
        }
    
        $("#" + filterId).children('.filter_options').append(entry);
      });
    ...
    });

    DOM based caching

    To save bandwidth and increase responsiveness in online use, we took this simple approach a little further and consciously stored more application-level information in the DOM than was needed for the current display, whenever that information was likely to be needed in the next step. This way, we’d have easy and quick local access to it without calling – and waiting for – the API again.

    The technical way we did so was a funny hack. Let’s look at the transition from the search result list to the venue detail view to illustrate:

    As for the filters above, the screen class for the detailView has an init() method that populates the DOM structure based on API input as encapsulated on the application level. The trick now is, while rendering the search result list, to register an anonymous click handler for each of its rows, which – thanks to JavaScript closures – captures the very venue object that was used to render its row:

    renderItems: function (itemArray)
    {
      ...

      $.each(itemArray, function(key, itemData)
      {
        var item = screen.dom.resultRowTemplate.clone()
          .attr('id', itemData.uid)
          .addClass('venueinfo')
          .click(function()
          {
            $('#mapScreen').hide();
            screen.showDetails(itemData);
          });

        $('.result-name', item).text(itemData.name);
        $('.result-type-label', item).text(itemData.section);
        $('.result-type', item).text(itemData.subSection);

        ...

        listContainer.append(item);
      });
    },
    
    ...
    
    showDetails: function (venue)
    {
      require(['screen/detailView'], function (detailView)
      {
        detailView.init(venue);
      });
    },

    In effect, there’s a copy of the data for rendering each venue’s detail view stored in the DOM. But neither in hidden elements nor in custom attributes of the node object – rather, conveniently, in each of the closure-based click event handlers of the result list rows, with the added benefit that they don’t need to be explicitly read again but actively feed themselves into the venue details screen as soon as a row receives a touch event.

    And dummy feeds

    Finishing the app before MWC 2013 was pretty much a race against time, both for us and for Time Out’s API folks, who had an entirely different and equally (if not more) sportive thing to do. They therefore had very limited time for adding to the (legacy) API that we were building against. For one data feed, this meant that we had to resort to including static JSON files in the app’s manifest and distribution, and then use relative, self-referencing URLs as fake API endpoints. The illustrated list of top venues on the app’s main screen was driven this way.

    Not exactly nice, but much better than throwing static content into the HTML! Also, it kept the display code already fit for switching to the dynamic data source that eventually materialized later, and compatible with our offline data caching strategy.

    As the lack of live data on top venues then extended right to their teaser images, we made the latter physically part of the JSON dummy feed. In Base64 :) But even the low-end reference device did a graceful job of handling this huge load of ASCII garbage.
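
    Purely to illustrate the idea, such a dummy feed might have looked something like this (all values invented, Base64 payload truncated):

    {
      "venues": [
        {
          "uid": "42",
          "name": "Example Tapas Bar",
          "section": "Restaurants",
          "image": "/9j/4AAQSkZJRgABAQEASABIAAD..."
        }
      ]
    }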

    State preservation

    We had a whopping 5MB of local storage to spam, and different plans already (as well as much higher needs) for storing the map and application data for offline use. So what to do with this liberal and easily accessed storage location? We thought we could at least preserve the current application state there, so you’d find the app exactly as you left it when you returned to it.
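
    A minimal sketch of what that state preservation can look like; the key name and state shape are our invention:

    // persist the current screen and its parameters on every navigation step
    function saveState(screenId, params) {
      localStorage.setItem('appState',
        JSON.stringify({ screen: screenId, params: params }));
    }

    // on startup, restore where the user left off (null on first run)
    function restoreState() {
      var raw = localStorage.getItem('appState');
      return raw ? JSON.parse(raw) : null;
    }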

    Map

    A city guide is the very showcase of an app that’s not only geo aware but geo centric. Maps fit for quick rendering and interaction in both online and offline use were naturally a paramount requirement.

    After looking around what was available, we decided to go with Leaflet, a free, easy to integrate, mobile friendly JavaScript library. It proved to be really flexible with respect to both behaviour and map sources.

    With its support for pinching, panning and graceful touch handling plus a clean and easy API, Leaflet made us arrive at a well-usable, decent-looking map with moderate effort and little pain:

    For a different project, we later rendered the OSM vector data for most of Europe into terabytes of PNG tiles in cloud storage, using on-demand cloud power. We’d recommend that approach whenever there’s a good reason not to rely on 3rd-party-hosted tiles – as long as you don’t try this at home: moving the tiles may well be slower and more costly than generating them.

    But as time was tight before the initial release of this app, we just – legally and cautiously(!) – scraped ready-to-use OSM tiles off MapQuest.com.

    The packaging of the tiles for offline use was rather easy for Barcelona, because about 1000 map tiles are sufficient to cover the whole city area up to street level (zoom level 16). So we could add each tile as a single line in the manifest.appcache file. The resulting, fully automatic, browser-based download on first use was only 10MB.

    This left us with a lot of lines like

    /mobile/maps/barcelona/15/16575/12234.png
    /mobile/maps/barcelona/15/16575/12235.png
    ...

    in the manifest and wishing for a $GENERATE clause as for DNS zone files.
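
    Lacking such a clause, a trivial script can stand in as the generator; this sketch (tile ranges invented) prints one manifest line per tile:

    var zoom = 15;
    for (var x = 16570; x <= 16580; x++) {
      for (var y = 12230; y <= 12240; y++) {
        console.log('/mobile/maps/barcelona/' + zoom + '/' + x + '/' + y + '.png');
      }
    }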

    As convenient as it may seem to throw all your offline dependencies’ locations into a single file and just expect them to be available as a consequence, there are significant drawbacks to this approach. The article Application Cache is a Douchebag by Jake Archibald summarizes them, and some help is given at HTML5 Rocks by Eric Bidelman.

    We found at the time that the degree of control over the current download state was limited, and that resuming the app cache download when the initial time users spent in our app didn’t suffice for it to complete was rather tiresome.

    For Barcelona, we resorted to marking the cache state as dirty in Local Storage and clearing that flag only after we received the updateready event of the window.applicationCache object. In the later generalization to more cities, however, we moved the map away from the app cache altogether.
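
    The dirty-flag approach boils down to a few lines; the key name here is our own:

    // assume the worst until the app cache reports a completed update
    localStorage.setItem('mapCacheState', 'dirty');

    window.applicationCache.addEventListener('updateready', function () {
      localStorage.setItem('mapCacheState', 'clean');
    });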

    Offline storage

    The first step towards offline-readiness was obviously to know if the device was online or offline, so we’d be able to switch the data source between live and local.

    This sounds easier than it was. Even with cross-platform considerations aside, neither the online state property (window.navigator.onLine), nor the “online” and “offline” events fired on the <body> element for state changes, nor the navigator.connection object that was supposed to hold the on/offline state plus bandwidth and more, really turned out reliable enough.

    Standardization is still ongoing around all of the above, and some implementations are labeled as experimental for a good reason :)

    We ended up writing a NetworkStateService class that uses all of the above as hints, but ultimately and very pragmatically convinces itself, with regular HEAD requests to a known live URL, that no event went missing and that the state is correct.
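
    Stripped of all detail, the idea behind that class looks like this; the ping URL and interval are placeholders:

    var online = window.navigator.onLine; // browser hint as a starting point

    function verify() {
      var xhr = new XMLHttpRequest();
      xhr.open('HEAD', 'https://www.example.com/ping', true);
      xhr.timeout = 5000;
      xhr.onload = function () { online = true; };
      xhr.onerror = xhr.ontimeout = function () { online = false; };
      xhr.send();
    }

    // treat the browser events as hints only, and re-check on a timer
    window.addEventListener('online', verify);
    window.addEventListener('offline', verify);
    setInterval(verify, 30000);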

    That settled, we still needed to make the app work in offline mode. In terms of storage opportunities, we were looking at:

    • App / app cache – i.e. everything listed in the file that the value of appcache_path in the app's webapp.manifest points to, and which is therefore downloaded onto the device when the app is installed.
      Capacity: <= 50MB; on other platforms (e.g. iOS/Safari), user interaction is required from 10MB+. The recommendation from Mozilla was to stay below 2MB.
      Updates: Hard. Requires user interaction / consent, and only a wholesale update of the entire app is possible.
      Access: By (relative) path.
      Typical use: HTML, JS, CSS, static assets such as UI icons.

    • LocalStorage
      Capacity: 5MB on UTF-8 platforms such as FFOS, 2.5MB on UTF-16 platforms such as Chrome (details here).
      Updates: Anytime from the app.
      Access: By name.
      Typical use: Key-value storage of app status, user input, or the entire data of modest apps.

    • Device Storage (often SD card)
      Capacity: Limited only by hardware.
      Updates: Anytime from the app (unless mounted as a USB drive when connected to a desktop computer).
      Access: By path, through the Device Storage API.
      Typical use: Big things.

    • FileSystem API
      Bad idea.

    • Database
      Capacity: Unlimited on FFOS; mileage on other platforms varies.
      Updates: Anytime from the app.
      Access: Quick, and by arbitrary properties.
      Typical use: Databases :)

    Some aspects of where to store the data for offline operation were decided upon easily, others not so much:

    • the app, i.e. the HTML, JS, CSS, and UI images would go into the app cache
    • state would be maintained in Local Storage
    • map tiles again in the app cache. Which was a rather dumb decision, as we learned later. Barcelona up to zoom level 16 was 10MB, but later cities were different: London was >200MB, and even reduced to a maximum zoom of 15 it was still a hefty 61MB. So we moved the tiles to Device Storage and added an actively managed download process for later releases.
    • the venue information, i.e. all the names, locations, images, reviews, details, showtimes etc. of the places that Time Out shows in Barcelona. Seeing that we needed lots of space, efficient and arbitrary access, plus dynamic updates, this had to go into the Database. But how?

    The state of affairs across the different mobile HTML5 platforms was confusing at best, with Firefox OS already supporting IndexedDB, but Safari and Chrome (considering earlier versions up to Android 2.x) still relying on a swamp of similar but different SQLite / WebSQL variations.

    So we cried for help and received it, as always when we reached out to the Mozilla team. This time it came in the form of a pointer to PouchDB, a JS-based DB layer that wraps the different native DB storage engines behind a CouchDB-like interface and adds super easy on-demand synchronization to a remote CouchDB-hosted master DB.

    Back then it was still in pre-alpha state, but very usable already. There were some drawbacks, such as the need to add a shim for WebSQL-based platforms. That in turn meant we couldn’t rely on storage being 8-bit clean, so we had to Base64-encode our binaries, most of all the venue images. Not exactly PouchDB’s fault, but it still blew up the size.
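
    To give a flavor of the programming model, here is a sketch using a later PouchDB API for illustration (the pre-alpha API differed, and the remote URL is hypothetical):

    var db = new PouchDB('venues');

    // on-demand synchronization from the remote CouchDB master
    db.replicate.from('https://couch.example.com/venues')
      .on('complete', function () { console.log('venue DB is up to date'); })
      .on('error', function (err) { console.error(err); });

    // quick local reads by key, also while offline
    db.get('venue-42', function (err, venue) {
      if (!err) { console.log(venue.name); }
    });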

    Harvesting

    With the DB platform chosen, we next had to think about how we’d harvest all the venue data from Time Out’s API into the DB. There were a couple of endpoints at our disposal. The most promising for this task was proximity search with no category or other restrictions applied, as we thought it would let us harvest a given city square by square.

    The trouble with distance metrics, however, is that they produce circles rather than squares. So step 1 of our thinking would miss venues in the corners of our theoretical grid,

    while extending the radius to (half) the grid’s diagonal would produce redundant hits and necessitate deduplication.

    In the end, we simply searched by proximity to a city center location, paginating through the result indefinitely, so that we could be sure to encounter every venue, and only once:

    Technically, we built the harvester in PHP as an extension to the CORS-enabled, result-reducing API proxy for live operation that was already in place. It fed the venue information into the master CouchDB co-hosted there.

    With the time left before MWC 2013 getting tight, we didn’t spend much time on sophisticated data organization and just pushed the venue information into the DB as one table per category, one row per venue, indexed by location.

    This allowed us to support category-based and area / proximity-based (map and list) browsing. We developed an idea of how offline keyword search might be made possible, but it never came to that. So the app simply removes the search icon when it goes offline, and puts it back when it has live connectivity again.

    Overall, the app now

    • supported live operation out of the box,
    • checked its synchronization state to the remote master DB on startup,
    • asked, if needed, permission to make the big (initial or update) download,
    • supported all use cases but keyword search when offline.

    The involved components and their interactions are summarized in this diagram:

    Organizing vs. Optimizing the code

    For the development of the app, we maintained the code in a well-structured and extensive source tree, with e.g. each JavaScript class residing in a file of its own. Part of the source tree is shown below:

    This was, however, not ideal for deployment of the app, especially as a hosted Firefox OS app or mobile web site, where the download is faster the fewer and smaller the files are.

    Here, Require.js came to our rescue.

    It provides a very elegant way of smart and asynchronous requirement handling (AMD), but more importantly for our purpose, comes with an optimizer that minifies and combines the JS and CSS source into one file each:

    To enable asynchronous dependency management, modules and their requirements must be made known to the AMD API through declarations, essentially of a function that returns the constructor for the class you’re defining.

    Applied to the search result screen of our application, this looks like this:

    define
    (
      // new class being defined
      'screensSearchResultScreen',
    
      // its dependencies
      ['screens/abstractResultScreen', 'app/applicationController'],
    
      // its anonymous constructor
      function (AbstractResultScreen, ApplicationController)
      {
        var SearchResultScreen = $.extend(true, {}, AbstractResultScreen,
        {
          // properties and methods
          dom:
          {
            resultRowTemplate: $('#searchResultRowTemplate'),
            list: $('#search-result-screen-inner-list'),
            ...
          }
          ...
        });
        ...
    
        return SearchResultScreen;
      }
    );

    For executing the optimization step in the build & deployment process, we used Rhino, Mozilla’s Java-based JavaScript engine:

    java -classpath ./lib/js.jar:./lib/compiler.jar   
      org.mozilla.javascript.tools.shell.Main ./lib/r.js -o /tmp/timeout-webapp/
      $1_config.js

    CSS bundling and minification is supported, too, and requires just another call with a different config.

    Outcome

    Four weeks had been a very tight timeline to start with, and we had completely underestimated the intricacies of taking HTML5 to a mobile and offline-enabled context, and wrapping up the result as a Marketplace-ready Firefox OS app.

    Debugging capabilities in Firefox OS, especially on the devices themselves, were still at an early stage (compared to clicking about:app-manager today). So the lights in our Cologne office remained lit until pretty late then.

    Having built the app with a clear separation between functionality and presentation also turned out to be a wise choice when, a week before T0, new mock-ups for most of the front end came in :)

    But it was great and exciting fun, we learned a lot in the process, and ended up with some very useful shiny new tools in our box. Often based on pointers from the super helpful team at Mozilla.

    Truth be told, we had started into the project with mixed expectations as to how close to the native app experience we could get. We came back fully convinced and eager for more.

    In the end, we made the deadline, and as a fellow hacker you can probably imagine our relief. The app even received its 70 seconds of fame when Jay Sullivan briefly demoed it at Mozilla’s MWC 2013 press conference as a showcase for HTML5’s and Firefox OS’s offline readiness (Time Out piece at 7:50). We were so proud!

    If you want to play with it, you can find the app in the marketplace or go ahead and try it online (no offline mode then).

    Since then, the Time Out Firefox OS app has continued to evolve, and we as a team have used the chance to continue to play with and build apps for FFOS. To some degree, the reusable part of this has become a framework in the meantime, but that’s a story for another day.

    We’d like to thank everyone who helped us along the way, especially Taylor Wescoatt, Sophie Lewis and Dave Cook from Time Out, Desigan Chinniah and Harald Kirschner from Mozilla, who were always there when we needed help, and of course Robert Nyman, who patiently coached us through writing this up.

  5. Applications Open for Expanded Tablet Contribution Program

    Last month, Mozilla announced the Tablet Contribution Program to help deliver Firefox OS to the tablet form factor. Today, we are excited to open the Application for Hardware Support to Mozillians all over the world who will sign up to contribute to Firefox OS coding, testing, localizing, and product planning.

    The first device for this program is the 10″ InFocus tablet from Foxconn, with a 1.0GHz Quad-Core Cortex-A7 processor.

    Foxconn's InFocus F1 New Tab running Firefox OS

    Brand/Model: Foxconn InFocus
    Processor: A31 (ARM Cortex A7) Quad-Core 1.2GHz w/ PowerVR SGX544MP2 GPU
    RAM: 2GB
    Storage: 16GB
    Screen: 10.1" IPS capacitive multi-touch @ 1280x800
    Camera: Dual cameras, 2MP/5MP
    Wireless: 802.11b/g/n
    Ports: Micro SD, Micro USB, headphone
    Other: GPS/AGPS, Bluetooth, Gyroscope
    Battery: 7000mAh

    Today, we’re also excited to announce an upcoming addition to the program, a 7″ Vixen tablet from VIA, with a 1.2GHz dual-core Cortex-A9 processor.

    VIA Vixen running Firefox OS

    Brand/Model: VIA Vixen
    Processor: WM8880 (ARM Cortex A9) Dual-Core 1.2GHz w/ Dual-Core Mali 400 GPU
    RAM: 1GB
    Storage: 8GB
    Screen: 7" capacitive multi-touch @ 1024x600
    Camera: Dual cameras, 0.3MP/2.0MP
    Wireless: 802.11b/g/n
    Ports: Micro SD, Micro USB, Mini HDMI, headphone
    Other: Bluetooth, Accelerometer
    Battery: 4000mAh

    We have limited quantities of these developer devices so we’re looking for dedicated contributors who can commit to regular testing and reporting of defects, identifying and documenting feature gaps with competitor tablets, triaging incoming bug reports, localizing and translating UI, prioritizing work and building roadmaps, hacking on existing features and bugs, defining new features and experiences, and more.

    If that sounds exciting to you, and you’ve got time and skills to work with Mozilla to make a real difference in the tablet space, apply now for free developer tablet hardware from Mozilla.

  6. Wanted: Awesome HTML5 app ports for Firefox OS & the Open Web

    A bit of background

    In 2013, Mozilla and our partners launched Firefox OS in fourteen markets. We released three Firefox OS smartphone models and a Geeksphone developer preview device. Our Developer Relations team hosted eight invite-only workshops for app developers around the world: Mountain View, London, Madrid, Bogota, Warsaw, Porto Alegre, Guadalajara, and Budapest. This year, Firefox OS will launch on more devices in more countries around the world. We continue to grow Firefox Marketplace with new and engaging HTML5 apps that run on Firefox OS devices.

    Through our Phones for Apps program, we’ve shipped hundreds of Geeksphones to app developers around the world. Developers like you ported existing HTML5 apps to Firefox OS, and got to keep the developer preview devices we sent. Huge thanks to hundreds of these pioneers in scores of countries who delivered apps to Firefox Marketplace!

    What’s new: Phones for Cordova/PhoneGap ports

    This year we’re focused on finding the very best apps — apps that deliver great experiences and local relevance to Firefox OS users. For HTML5 app developers, powerful cross-platform tools make it easier to build apps for native platforms and to access native device APIs. This week we introduced the Firefox OS Cordova 3.4 integration, which makes it possible to release Cordova apps on the Firefox OS platform. While this is an ongoing project, significant functionality is available now with the 3.4 release of Apache Cordova. Yesterday’s post describes how to use these new capabilities to port your existing app or apps.

    If you’ve already built apps with Cordova/PhoneGap this is a unique opportunity to port your apps quickly and easily. We’ve heard from developers who successfully migrated PhoneGap apps to Firefox OS over a weekend — taking hours, not weeks or months. And we know there are great PhoneGap apps out there. For this reason, the third phase of our popular and successful Phones for Apps program will focus exclusively on app “porters” with currently popular and well-rated Cordova/PhoneGap apps. (NOTE: HTML5 applications can be packaged as native apps via the framework and made available for installation. Cordova is the underlying software in the Adobe product PhoneGap.)

    Who should apply


    If you’re a Cordova/PhoneGap developer with published HTML5 apps on any platform, we invite you to participate in the Phones for Cordova/PhoneGap Apps program. Show us your well-rated, published app and if it’s a fit for Firefox Marketplace, we’ll ship you a developer device and help you through the process of getting your app into the Firefox Marketplace. Here’s how it works:

    • Apply here: Phones for Cordova/PhoneGap Ports application.
    • All submissions will be reviewed as they are received. We’ll let you know if your application is accepted.
    • Once you commit to porting your app, we’ll send you a device.

    Upcoming Firefox OS workshops & hack events

    In addition to our online Phones for Cordova/PhoneGap Ports program, we plan to host a limited number of invitation-only Firefox OS App Workshops in new locales. We’re excited to announce a workshop in the beautiful city of Prague in the Czech Republic, date to be set next quarter. The enrollment form is open and we are accepting applications from qualified HTML5 developers.

    REQUIRED: You must have a Firefox OS app in progress or a published HTML5 app that you’re porting to Firefox OS. Show us a link to your existing app or to working code for the app you’re building. If you don’t provide relevant links to working app code we will not consider your application.

    Apply now for the Prague App Workshop.

    What if you don’t live near Prague in the Czech Republic? Don’t worry! We plan to offer several other app workshops this year and we’ll announce them here.


    Whether you’re just getting started or you’ve already built apps for Firefox OS, there are many hackathons, App Days, and other events hosted or sponsored by Mozillians and happening around the world where you can participate and learn. Find out about community-driven Firefox events hosted by Mozilla Reps as well as talks and events that include Mozilla participants across the planet.

  7. Building Cordova apps for Firefox OS

    Update: In addition to the Cordova integration described below, Firefox OS is now supported in the 3.4 release of Adobe PhoneGap.

    If you’re already building apps with PhoneGap, you can quickly and easily port your existing apps to Firefox OS. We think this is so cool that we’ve launched a Phones for PhoneGap Apps program, focused specifically on compelling apps built with PhoneGap and/or Cordova. Got a great PhoneGap app? We’d love to send you a device!

    Cordova is a popular Apache Foundation open source project providing a set of device APIs to allow mobile application developers to access native device functions—such as the camera or the accelerometer—from JavaScript. HTML5 applications can be packaged as native apps via the framework and made available for installation from the app stores of supported platforms, including iOS, Android, Blackberry, Windows Phone—and now Firefox OS. Cordova is also the underlying software in the Adobe product PhoneGap.

    Over the past few months, Mozilla has been working with the Cordova team to integrate Firefox OS into the Cordova framework, making it possible to release Cordova apps on the Firefox OS platform. While this is an ongoing project, significant functionality is available now with the 3.4 release of Cordova. In this post we will describe how to use these new capabilities.

    Creating and building a Firefox OS app in Cordova

    The Cordova site explains how to install the software. Note the installation procedure requires Node.js and can be executed from the command line.

    $ sudo npm install -g cordova

    Once Cordova is installed, an application can be built with the Cordova create command. (Parameters for the command are described in the Cordova documentation linked above.)

    $ cordova create hello com.example.hello HelloWorld

    This will create a directory named hello that contains the project and a basic web app in the hello/www directory. In order to produce a Firefox OS app, the proper platform needs to be added next.

    $ cd hello
    $ cordova platform add firefoxos

    With some of the other supported platforms, you would generally run a build command at this stage to produce the output for the platform. Because Firefox OS is an HTML5-based operating system, no compile step is needed to process and produce the app. The only step required is a prepare statement to package the app.

    $ cordova prepare firefoxos

    Those are the basic steps to generate a simple Firefox OS app from a Cordova project. The output for the project will be located in the hello/platforms/firefoxos/www directory.

    Debugging the app

    With most other Cordova platforms you would use the emulate or run commands to test an app. With Firefox OS you currently need to use the App Manager, which is the primary tool for debugging and interrogating a Firefox OS app. The tool offers many capabilities including JavaScript debugging and live editing of the CSS while connected to a device.

    The above link explains how to install and start the App Manager. Once the App Manager is running, you can click the Add Packaged App button and select the hello/platforms/firefoxos/www directory of your app and press the Open button.


    This will add the basic app to the App Manager. You will notice no icons are present. This is because the framework integration does not provide them currently and only a bare-bones manifest.webapp is created. From here you can update the app or debug it. Note that between updates you must run a cordova prepare firefoxos command as this step packages the app and puts it in the platforms/firefoxos/www directory. Below is a screenshot of the Cordova HelloWorld app being debugged.


    The Firefox OS Manifest

    Firefox OS apps are essentially HTML5 applications that are described by a manifest file. This manifest file points to artifacts like icons, start page, etc. that will be used by the application. In addition, the manifest controls privilege levels and device-specific APIs that are needed by the app. The manifest documentation is available on MDN.

    With the default integration in Cordova, a very generic manifest is created and placed in the platforms/firefoxos/www directory. In almost all cases this will not suffice, as you will at least want to provide icons for your app. The App Manager will complain if the app does not contain at least a 128×128 pixel icon. This does not prevent you from testing your app, but it is required for uploading your app to the Firefox Marketplace. The manifest can be created with a simple text editor or you can modify the manifest in the App Manager. An example manifest.webapp is shown below.

    {
      "name": "My App",
      "description": "My elevator pitch goes here",
      "launch_path": "/",
      "icons": {
        "128": "/img/icon-128.png"
      },
      "developer": {
        "name": "Your name or organization",
        "url": "http://your-homepage-here.org"
      },
      "default_locale": "en"
    }

    Make sure the manifest is created or copied into the project/www folder. Subsequent cordova prepare commands will replace the auto-generated manifest with your application-specific manifest.

    Start Up Code

    When creating a Cordova app, the starter code generated includes index.html, css/index.css, img/logo.png and js/index.js files. The code in index.js is initiated in index.html like this:

    <script type="text/javascript">
      app.initialize();
    </script>

    The initialize function essentially sets up the event trigger for the onDeviceReady event, which signifies that the Cordova framework is loaded and ready. The generated code is sufficient for Firefox OS unless you want to implement a privileged app. Privileged apps are Marketplace-signed apps that require the use of more sensitive APIs – for example, the Contacts API. See the Packaged apps documentation for more information. For privileged apps, code like this violates CSP restrictions because of the inline script tag. To get around this, remove the inline script and initiate the app using a window.onload event in the js/index.js file.
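
    A minimal sketch of that CSP-friendly startup, assuming the generated app object:

    // js/index.js – no inline script needed in index.html anymore
    window.onload = function () {
        app.initialize();
    };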

    Sample App

    To test and debug the Cordova/Firefox OS integration we developed a sample app. This application is available on GitHub. It illustrates use of the device-specific plugins. The images and code snippets in the following sections were taken from the sample app. If you want to check out the code and work with it, first create a Cordova project and check the code into the project/www directory. You can then run cordova prepare firefoxos to package the app. Run and debug as described earlier in this post.


    Device APIs

    Cordova uses a plugin architecture to implement device APIs, such as Accelerometer, Geolocation or Contacts. These APIs are very similar to Firefox OS Web APIs and Web Activities and are documented on the Cordova website. Below are the current plugins implemented for Firefox OS and a brief description of how you can include them in your app. You can always see the current status of plugin development for the Firefox OS Platform by checking Jira on the Apache website.

    Notification API

    The notification API is used to alert the user of your app and is implemented in two plugins: org.apache.cordova.dialogs and org.apache.cordova.vibration. Currently we have implemented the alert, confirm, prompt and vibrate functions. To use this functionality, add the plugins to your project with the following commands:

    $ cordova plugin add org.apache.cordova.dialogs
    $ cordova plugin add org.apache.cordova.vibration

    To get proper styling of popup boxes in Firefox OS, you will need to add the notification.css file to your project. After adding the dialogs plugin, change to the project/plugins/org.apache.cordova.dialogs/www/firefoxos directory and copy the notification.css file to your project/www/css folder. Link the CSS file in head element of index.html.

    <link rel="stylesheet" type="text/css" href="css/notification.css" />

    You can now use the notification functions.

    function onPrompt(results) {
        alert("You selected button number " +
              results.buttonIndex +
              " and entered " + results.input1);
    }

    navigator.notification.vibrate(500);

    navigator.notification.prompt(
        'Enter Name',    // message
        onPrompt,        // callback to invoke
        'Prompt Test',   // title
        ['Ok', 'Exit'],  // buttonLabels
        'Doe, Jane'      // defaultText
    );


    Compass API

    The compass API is implemented using the org.apache.cordova.device-orientation plugin. This plugin implements the compass getCurrentHeading and watchHeading functions. To use it simply run the plugin add command:

    $ cordova plugin add org.apache.cordova.device-orientation

    Once the plugin is added, you can use the getCurrentHeading or watchHeading functions to get compass information.

    function onSuccess(heading) {
        var element = document.getElementById('heading');
        var myHeading = heading.magneticHeading.toFixed(2);
        element.textContent = myHeading; // display the current heading
        console.log("My Heading = " + myHeading);
    }
    function onError(compassError) {
        alert('Compass error: ' + compassError.code);
    }
    var options = {
        frequency: 500
    };
    watchID = navigator.compass.watchHeading(onSuccess, onError, options);


    Accelerometer API

    The Accelerometer is accessed using the org.apache.cordova.device-motion plugin and gives the developer access to acceleration data in x, y and z directions. This plugin implements the getCurrentAcceleration and watchAcceleration functions.

    To use these functions, add the device-motion plugin to your project by executing the following command.

    $ cordova plugin add org.apache.cordova.device-motion

    You can then monitor the acceleration values using code similar to this:

    var options = {
        frequency: 100
    };
    watchIDAccel = navigator.accelerometer.watchAcceleration(onSuccess, onError, options);
    function onSuccess(acceleration) {
      var acX = acceleration.x.toFixed(1) * -1;
      var acY = acceleration.y.toFixed(1);
      var acZ = acceleration.z.toFixed(1);
      var vals = document.getElementById('accvals');
      var accelstr = "<strong>Accel X: </strong>" + acX + "<br>" + "<strong>Accel Y: </strong>" + acY + "<br>" + "<strong>Accel Z: </strong>" + acZ;
      vals.innerHTML = accelstr;
    }
    function onError() {
      alert('Could not Retrieve Accelerometer Data!');
    }

    You can also monitor the device orientation event and retrieve alpha, beta, and gamma rotation values, like this:

    function deviceOrientationEvent(eventData) {
        // skew left and right
        var alpha = Math.round(eventData.alpha);
        // front to back: negative = back, positive = front
        var beta = Math.round(eventData.beta);
        // roll: positive = left, negative = right
        var gamma = Math.round(eventData.gamma);
        console.log("alpha = " + alpha + " beta = " + beta + " gamma = " + gamma);
    }
    window.addEventListener('deviceorientation', deviceOrientationEvent);

    Camera API

    The camera API is used to retrieve an image from the gallery or from the device camera. This API is implemented in the org.apache.cordova.camera plugin. To use this feature, add the plugin to your project.

    $ cordova plugin add org.apache.cordova.camera

    In the Firefox OS implementation of this plugin, the getPicture function will trigger a Web Activity that allows the user to select where the image is retrieved.

    Code similar to the following can be used to execute the getPicture function:

    navigator.camera.getPicture(function (src) {
        var img = document.createElement('img');
        img.id = 'slide';
        img.src = src;
        document.body.appendChild(img); // attach the image so it becomes visible
    }, function () {}, {
        destinationType: 1 // Camera.DestinationType.FILE_URI
    });

    Contacts API

    The contacts API is used to create or retrieve contacts on the device and is implemented in the org.apache.cordova.contacts plugin. To access this feature, run the following command:

    $ cordova plugin add org.apache.cordova.contacts

    Apps that access contacts must be privileged with the appropriate permission set in the manifest file. See “The Firefox OS Manifest” section earlier in this post to understand how to create a custom manifest for your application. For this API, you will need to add the following permissions to the manifest:

    "permissions": {
      "contacts": {
        "access": "readwrite",
        "description": "creates contacts"
      }
    }

    See the manifest documentation for specific access rights. In addition, you will need to change the type of app to privileged in the manifest.

    "type": "privileged",

    Once the manifest has been changed, you can add contacts with code like the following.

    // create a new contact object
    var contact = navigator.contacts.create();
    var name = new ContactName();
    name.givenName = fname;   // fname / lname as collected from user input
    name.familyName = lname;
    contact.name = name;
    contact.save(onSuccess, onError);


    Geolocation API

    The geolocation API is used to retrieve the location, time and speed values from the device’s GPS unit and is implemented in the org.apache.cordova.geolocation plugin.

    $ cordova plugin add org.apache.cordova.geolocation

    You can retrieve the device latitude, longitude and a timestamp using this API on Firefox OS, but it does require the addition of a permission to the manifest file. See “The Firefox OS Manifest” section earlier in this post to understand how to create a custom manifest for your application.

    "permissions": {
        "geolocation": {
          "description": "Marking out user location"
        }
    }

    Adding this permission causes the app to prompt the user for permission to retrieve the GPS data. You can use either getCurrentPosition to read the GPS once, or watchPosition to get interval-based updates.

    var onSuccess = function (position) {
        console.log('Latitude: ' + position.coords.latitude + '\n' + 
        'Longitude: ' + position.coords.longitude + '\n'); 
    };
    function onError(error) {
        console.log('Error getting GPS Data');
    }
    navigator.geolocation.getCurrentPosition(onSuccess, onError);


    Join Us

    This post covered some of the basics of the new Firefox OS Cordova integration. We will continue to add more device APIs to the project, so stay tuned. If you are interested in working on the integration or need support for a specific device plugin, please contact us on Stack Overflow under the firefox-os tag or in #cordova on Mozilla IRC.

    In the meantime, if you have a Cordova app that makes use of the APIs discussed above, please try generating it for Firefox OS and submitting it to the Firefox Marketplace!

  8. Ember.JS – What it is and why we need to care about it

    This is a guest post by Sourav Lahoti and his thoughts about Ember.js

    Developers increasingly turn to client-side frameworks to simplify development, and there’s a big need for good ones in this area. We see a lot of players in this field, but among the many frameworks with lots of functionality and moving parts, very few stand out in particular — Ember.js is one of them.

    So what is Ember.js? Ember.js is an MVC (Model–View–Controller) JavaScript framework which is maintained by the Ember Core Team (including Tom Dale, Yehuda Katz, and others). It helps developers create ambitious single-page web applications that don’t sacrifice what makes the web great: URI semantics, RESTful architecture, and the write-once, run-anywhere trio of HTML, CSS, and JavaScript.

    Why do we need to care

    Ember.js is tightly coupled with the technologies that make up the web today, and it doesn’t attempt to abstract them away. It brings a clean and consistent application development model, and should the web front end ever move on from today’s HTML, Ember.js can be expected to evolve along with the current trends.

    It makes it very easy to create your own “components” and “template views” that are easy to understand, create, and update. Coupled with its consistent way of managing bindings and computed properties, Ember.js takes care of much of the boilerplate code that a web application otherwise needs.

    The core concept

    There are a few terms that you will encounter all the time when you use Ember.js, and they form the basics of the framework:

    Routes
    A Route object basically represents the state of the application and corresponds to a URL.
    Models
    Every route has an associated Model object, containing the data associated with the current state of the application.
    Controllers
    Controllers are used to decorate models with display logic.

    A controller typically inherits from ObjectController if the template is associated with a single model record, or an ArrayController if the template is associated with a list of records. (A minimal example follows after this list.)

    Views
    Views are used to add sophisticated handling of user events to templates or to add reusable behavior to a template.
    Components
    Components are a specialized view for creating custom elements that can be easily reused in templates.
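
    To make the controller concept concrete before diving in, here is a minimal sketch using the Ember 1.x API of the time; the names are illustrative:

    // a controller decorating a single model record with display logic
    App.PostController = Ember.ObjectController.extend({
      // computed property derived from the model's body attribute
      excerpt: function () {
        return this.get('body').substring(0, 100) + '...';
      }.property('body')
    });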

    Hands-on with Ember.js

    Data Binding:

    <script type="text/x-handlebars">
      <p>
        <label>Insert your name:</label>
        {{input type="text" value=name}}
      </p>

      <p><strong>Echo: {{name}}</strong></p>
    </script>
    App = Ember.Application.create();

    Final result when the user interacts with the web app

    Ember.js does support data binding, as we can see in the above example. What we type into the input is bound to name, as is the text after Echo:. When you change the text in one place, it automatically updates everywhere.

    But how does this happen? Ember.js uses Handlebars for two-way data binding. Templates written in Handlebars get and set data from their controller. Every time we type something into our input, the name property of our controller is updated. Then, automatically, the template is updated because the bound data changed.

    A simple Visiting card demo using Handlebars

    We can create our own elements by using Handlebars.

    HTML

    <script type="text/x-handlebars">
     
      {{v-card myname=name street-address=address locality=city zip=zipCode email=email}}
     
      <h2 class="subheader">Enter Your information:</h2>
     
      <label>Enter Your Name:</label>
      {{input type="text" value=name}}
     
      <label>Enter Your Address:</label>
      {{input type="text" value=address}}
     
      <label>Enter Your City:</label>
      {{input type="text" value=city}}
     
      <label>Enter Your Zip Code:</label>
      {{input type="text" value=zipCode}}
     
      <label>Enter Your Email address:</label>
      {{input type="text" value=email}}
     
    </script>
     
    <script type="text/x-handlebars" data-template-name="components/v-card">
     
      <ul class="vcard">
        <li class="myname">{{myname}}</li>
        <li class="street-address">{{street-address}}</li>
        <li class="locality">{{locality}}</li>
        <li><span class="state">{{usState}}</span>, <span class="zip">{{zip}}</span></li>
        <li class="email">{{email}}</li>
      </ul>
     
    </script>

    CSS

    .vcard {
      border: 1px solid #dcdcdc;
      max-width: 12em;
      padding: 0.5em;
    }
     
    .vcard li {
      list-style: none;
    }
     
    .vcard .myname {
      font-weight: bold;
    }
     
    .vcard .email {
      font-family: monospace;
    }
     
    label {
      display: block;
      margin-top: 0.5em;
    }

    JavaScript

    App = Ember.Application.create();
     
    App.ApplicationController = Ember.Controller.extend({
        name: 'Sourav',
        address: '123 M.G Road.',
        city: 'Kolkata',
        zipCode: '712248',
        email: 'me@me.com'
    });

    The component is defined by opening a new <script type="text/x-handlebars">, and setting its template name using the data-template-name attribute to be components/[NAME].

    We should note that the web components specification requires the name to have a dash in it in order to separate it from existing HTML tags.

    There is much more to it, I have just touched the surface. For more information, feel free to check out the Ember.js Guides.

  9. Live Editing Sass and Less in the Firefox Developer Tools

    Sass and Less are expressive languages that compile into CSS. If you’re using Sass or Less to generate your CSS, you might want to debug the source that you authored and not the generated CSS. Luckily you can now do this in the Firefox 29 developer tools using source maps.

    The Firefox developer tools use source maps to show the line number of rules in the original source, and let you edit original sources in the Style Editor. Here’s how to use the feature:

    1. Generate the source map

    When compiling a source to CSS, use the option to generate a sourcemap for each style sheet. To do this you’ll need Sass 3.3+ or Less 1.5+.

    Sass

    sass index.scss:index.css --sourcemap

    Less

    lessc index.less index.css --source-map

    This will create a .css.map source map file for each CSS file, and add a comment to the end of your CSS file with the location of the sourcemap: /*# sourceMappingURL=index.css.map */. The devtools will use this source map to map locations in the CSS style sheet to locations in the original source.

    2. Enable source maps in developer tools

    Right-click anywhere on the inspector’s rule view or in the Style Editor to get a context menu. Check off the Show original sources option:

    Enabling source maps in devtools

    Now CSS rule links will show the location in the original file, and clicking these links will take you to the source in the Style Editor:

    Original source showing in Style Editor

    3. Set up file watching

    You can edit the original source files in the Style Editor tool, but in order to see the changes applied to the page, you’ll have to watch for changes to your preprocessed source and regenerate the CSS file each time it changes. To set up watching:

    Sass

    sass index.scss:index.css --sourcemap --watch

    Less

    For Less, you’ll have to set up another service to do the watching, like grunt.

    4. Save the original source

    Save the original source to your local file system by hitting the Save link or Cmd/Ctrl-S:

    Saving source to disk

    The devtools will infer the location of the generated CSS file locally and watch that file for changes to update the live style sheet on the page.

    Now when you edit an original source and save it, the page’s style will update and you’ll get immediate feedback on your Sass or Less changes.

    The source has to be saved to disk and file watching set up in order for style changes to take effect.

  10. HTML5, CSS3, and the Bookmarklet that Shook the Web

    On Valentine’s Day last year we released a bookmarklet that went viral riding the popularity of the Harlem Shake meme. On the anniversary of its release, we’d like to take a moment to look back at the technical nuts and bolts of the bookmarklet as a case study in applying HTML5. In fact, the HTML, JavaScript, and CSS we used wouldn’t have worked in a single browser a few years ago. What follows is a technical discussion of how we took advantage of recent browser developments to shake up the web.

    Background

    Last year the Harlem Shake meme forced itself onto nearly every screen under the sun and, like everyone else, we had joked about doing our office version of the video. After tossing around a few bad video ideas, Ishan half-jokingly suggested a bookmarklet that made a web page do the Harlem Shake. Omar and Hari immediately jumped on the ingenuity of his idea and built a prototype within an hour that had the entire office LOLing. After pulling a classic all-nighter, we released it on February 14th, declaring “Happy Valentine’s Day, Internet! Behold, the Harlem Shake Bookmarklet”.

    Pretty soon it was picked up by news outlets like TechCrunch and HuffingtonPost, and our traffic skyrocketed. Meanwhile the bookmarklet offered a new avenue of expression in the watch-then-remix cycle that is the lifeblood of a viral meme like the Harlem Shake. Instead of creating a video of people dancing, developers could now remix this symbiotic meme in code. Startups like PivotDesk incorporated the bookmarklet into their websites, and HSMaker used the code to build a Harlem-Shake-As-A-Service website. Eventually, YouTube even built their own version as an easter egg on their site.

    So, how does it work?

    Once you click the Harlem Shake bookmark, a snippet of JS is evaluated on the webpage, just as you’d see by entering javascript:alert("Hi MozHacks!"); in your address bar. This JavaScript will play the Harlem Shake audio, “shake” DOM nodes (according to timing events attached to the audio), and remove all DOM changes afterward.

    How did we attach the audio to the page and get the timing for the shakes just right?

    HTML5’s extensive audio support made this implementation fairly easy. All that was required was inserting an <audio> tag with its src pointed at the Harlem_Shake.ogg file. Once inserted into the DOM, the file begins downloading, and playback begins once enough of the file has been buffered.

    HTML5 timed audio events allow us to know exactly when playback begins, updates, and ends. We attach a listener to the audio node which evaluates some JS once the audio reaches certain times. The first node starts shaking once the song is beyond 0.5s. Then, at 15.5s, we flash the screen and begin shaking all of the nodes. At 28.5s, we slow down the animations, and once the audio has ended, we stop all animations and clean up the DOM.
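
    Setting that up takes only a few lines; a sketch of the idea, with a placeholder audio URL:

    var audioTag = document.createElement('audio');
    audioTag.src = 'https://example.com/Harlem_Shake.ogg'; // hosted alongside the bookmarklet
    document.body.appendChild(audioTag); // buffering starts here
    audioTag.play(); // playback begins once enough has been buffered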

    audioTag.addEventListener("timeupdate", function() {
      var time = audioTag.currentTime,
          nodes = allShakeableNodes,
          len = nodes.length, i;
     
      // song started, start shaking first item
      if(time >= 0.5 && !harlem) {
        harlem = true;
        shakeFirst(firstNode);
      }
     
      // everyone else joins the party
      if(time >= 15.5 && !shake) {
        shake = true;
        stopShakeAll();
        flashScreen();
        for (i = 0; i < len; i++) {
          shakeOther(nodes[i]);
        }
      }
     
      // slow motion at the end
      if(audioTag.currentTime >= 28.4 && !slowmo) {
        slowmo = true;
        shakeSlowAll();
      }
    }, true);
     
    audioTag.addEventListener("ended", function() {
      stopShakeAll();
      removeAddedFiles();
    }, true);

    How did we choose which parts of the page to shake?

    We wrote a few helpers to calculate the rendered size of a given node, determine whether the node is visible on the page, and whether its size is within some (rather arbitrary) bounds:

    var MIN_HEIGHT = 30; // pixels
    var MIN_WIDTH = 30;
    var MAX_HEIGHT = 350;
    var MAX_WIDTH = 350;
     
    function size(node) {
      return {
        height: node.offsetHeight,
        width: node.offsetWidth
      };
    }
    function withinBounds(node) {
      var nodeFrame = size(node);
      return (nodeFrame.height > MIN_HEIGHT &&
              nodeFrame.height < MAX_HEIGHT &&
              nodeFrame.width > MIN_WIDTH &&
              nodeFrame.width < MAX_WIDTH);
    }
    // only calculate the viewport height and scroll position once
    var viewport = viewPortHeight();
    var scrollPosition = scrollY();
    function isVisible(node) {
      var y = posY(node);
      return (y >= scrollPosition && y <= (viewport + scrollPosition));
    }

    We got a lot of questions about how the bookmarklet was uncannily good at initiating the shake on logos and salient parts of the page. It turns out this was the luck of using very simple heuristics. All nodes are collected (via document.getElementsByTagName("*")) and we loop over them twice:

    1. On the first iteration, we stop once we find a single node that is within the bounds and visible on the page. We then start playing the audio with just this node shaking. Since elements are searched in the order they appear in the DOM (~ the order on the page), the logo is selected with surprising consistency.
    2. After inserting the audio, we have ~15 seconds to loop through all nodes to identify all shakeable nodes. These nodes get stored in an array, so that once the time comes, we can shake them.
    // get first shakeable node
    var allNodes = document.getElementsByTagName("*"), len = allNodes.length, i, thisNode;
    var firstNode = null;
    for (i = 0; i < len; i++) {
      thisNode = allNodes[i];
      if (withinBounds(thisNode)) {
        if(isVisible(thisNode)) {
          firstNode = thisNode;
          break;
        }
      }
    }
     
    if (firstNode === null) {
      console.warn("Could not find a node of the right size. Please try a different page.");
      return;
    }
     
    addCSS();
     
    playSong();
     
    var allShakeableNodes = [];
     
    // get all shakeable nodes
    for (i = 0; i < len; i++) {
      thisNode = allNodes[i];
      if (withinBounds(thisNode)) {
        allShakeableNodes.push(thisNode);
      }
    }

    How did we make the shake animations not lame?

    We utilized and tweaked the Animate.css library to speed up the process; it’s light and easy to use, with great results.

    First, every selected node gets the base class mw-harlem_shake_me, which defines the animation’s duration and how the styles should be applied.

    .mw-harlem_shake_me {
      -webkit-animation-duration: .4s;
         -moz-animation-duration: .4s;
           -o-animation-duration: .4s;
              animation-duration: .4s;
      -webkit-animation-fill-mode: both;
         -moz-animation-fill-mode: both;
           -o-animation-fill-mode: both;
              animation-fill-mode: both;
    }

    A second set of classes, defining the actual animation behavior, is randomly picked and assigned to the various nodes.

    @-webkit-keyframes swing {
      20%, 40%, 60%, 80%, 100% { -webkit-transform-origin: top center; }
      20% { -webkit-transform: rotate(15deg); } 
      40% { -webkit-transform: rotate(-10deg); }
      60% { -webkit-transform: rotate(5deg); }  
      80% { -webkit-transform: rotate(-5deg); } 
      100% { -webkit-transform: rotate(0deg); }
    }
     
    @-moz-keyframes swing {
      20% { -moz-transform: rotate(15deg); }  
      40% { -moz-transform: rotate(-10deg); }
      60% { -moz-transform: rotate(5deg); } 
      80% { -moz-transform: rotate(-5deg); }  
      100% { -moz-transform: rotate(0deg); }
    }
     
    @-o-keyframes swing {
      20% { -o-transform: rotate(15deg); }  
      40% { -o-transform: rotate(-10deg); }
      60% { -o-transform: rotate(5deg); } 
      80% { -o-transform: rotate(-5deg); }  
      100% { -o-transform: rotate(0deg); }
    }
     
    @keyframes swing {
      20% { transform: rotate(15deg); } 
      40% { transform: rotate(-10deg); }
      60% { transform: rotate(5deg); }  
      80% { transform: rotate(-5deg); } 
      100% { transform: rotate(0deg); }
    }
     
    .swing, .im_drunk {
      -webkit-transform-origin: top center;
      -moz-transform-origin: top center;
      -o-transform-origin: top center;
      transform-origin: top center;
      -webkit-animation-name: swing;
      -moz-animation-name: swing;
      -o-animation-name: swing;
      animation-name: swing;
    }

    Shake it like a polaroid picture

    What started as a joke ended up turning into its own mini-phenomenon. The world has moved on from the Harlem Shake meme, but the bookmarklet is still inspiring developers to get creative with HTML5.

    If you want to see the full source code or have suggestions, feel free to contribute to the Github repo!