
  1. It’s a wrap! “App Basics for FirefoxOS” is out and ready to get you started

    A week ago we announced a series of video tutorials about creating HTML5 apps for Firefox OS. Now we have released all the videos, and you can watch the series in one go.

    (Photo by Olliver Hallmann)

    The series is aimed at web developers who want to build their first HTML5 application. It is specifically meant to be distributed in emerging markets, where Firefox OS is often the first affordable option for getting a smartphone, and where developers can start selling apps to the audiences there.

    Over the last week, we released the videos of the series, one each day.

    Yesterday we announced the last video in the series. For all of you who asked for the whole series to watch in one go, you now have the chance to do so.

    There are various resources you can use:

    What’s next?

    There will be more videos on similar topics coming in the future and we are busy getting the videos dubbed in other languages. If you want to help us get the word out, check the embedded versions of the videos on Codefirefox.com, where we use Amara to allow for subtitles.

    Speaking of subtitles and transcripts, we are currently considering both, depending on demand. If you think this would be a very useful thing to have, please tell us in the comments.

    Thanks

    Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor, Brian Bondy from Khan Academy, and Paul Jarrat and Chris Heilmann of Mozilla for making all of this possible. Technologies used to make this happen were Screenflow, Amazon S3, Vid.ly by encoding.com and YouTube.

  2. Better integration for open web apps on Android

    Up until now, developing web apps on mobile has been a little tricky.

    After spending the time developing your app, getting your users to install it is difficult, especially when the concept of “installing a web app” is not very well defined.

    The most popular method is synonymous with adding a shortcut to the homescreen. This is problematic on a number of fronts, not least because the management of web apps – especially around launching, switching between and uninstalling web apps – differs significantly from that of native apps.

    • The web app “exists” only on the homescreen, not in the app drawer.
    • When it’s running, it is not clearly marked in the Recent Apps list.

    Even once you achieve something like a smooth user flow for installing your app onto the homescreen of the user’s phone, you often find that your app runs in a degraded or out-of-date web view, missing out on the compatibility and speed optimizations of a desktop-class browser.

    What we as developers would like is a modern, fast web runtime, which is kept up-to-date on our devices.

    Wouldn’t it also be nice for our users to launch and manage their web apps in the same way as native apps?

    Introducing APK Factory

    We have been working for some time on making web apps real on the desktop. On the desktop, if you install a web app, Firefox will repackage the app as a desktop app so that it integrates perfectly with the rest of your system, as outlined in more detail in Progress report on cross-platform Open Web Apps.

    That means being in the Start menu on Windows, or in the Launch Control screen on Mac OS X.

    From Firefox 29, that will apply to Android too.

    This means that as a web developer, you can rely on a modern, up-to-date web runtime on Android to run your web apps. Even better, that web runtime is provided by an ordinary Android app, which means it will stay modern and up-to-date, and you can finally say goodbye to the Android Browser.

    A web app, called ShotClock. Notice its icon in the top right of the screen.

    The user will experience your web app as if it is a real native Android app:

    • The app appears in the App Drawer and the Recent Apps list, with its own name and icon.
    • The app can be installed and uninstalled just like a native Android app.
    • The app can be updated just like a native Android app.

    In the App Drawer

    In the Recent Apps list: all these apps are web apps

    Installed with certain permissions

    Best of all, we make these changes without the developer needing to do anything. As a developer, you get to write awesome web apps and not worry about the different packaging needed to deliver the web app to your users.

    So if you’re already making first-class apps for Firefox OS, you’re already making first-class apps for Android.

    The Technical details

    On Firefox, you can install an app using the window.navigator.mozApps.install(manifestUrl) method call. Any website can use this API, so any website can become an app store.
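    For example, a store page might trigger and track an installation like this (a minimal sketch: the manifest URL is hypothetical, and the handlers assume the DOMRequest-style object that mozApps methods return):

    var request = window.navigator.mozApps.install('https://example.com/manifest.json');
    request.onsuccess = function() {
      // this.result is the newly installed app object
      console.log('Installed: ' + this.result.manifest.name);
    };
    request.onerror = function() {
      // this.error.name describes what went wrong, e.g. "DENIED"
      console.log('Install failed: ' + this.error.name);
    };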

    The manifestUrl is the URL of a manifest.json document which describes the app to your phone without actually loading the app (a minimal example follows the list):

    • The app’s name and description, translated into any number of languages.
    • The app’s icon, in various sizes for different pixel densities.
    • The permissions that the app needs to run.
    • The WebActivities that the app wants to register.
    • For packaged apps only, the URL of the zip file containing the app’s code and resources.
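    As a rough illustration, a minimal manifest covering these fields might look like the following. The values are hypothetical, and the field names follow the Open Web Apps manifest format; package_path only appears in the mini-manifest of a packaged app:

    {
      "name": "ShotClock",
      "description": "An example open web app",
      "launch_path": "/index.html",
      "icons": {
        "64": "/img/icon-64.png",
        "128": "/img/icon-128.png"
      },
      "locales": {
        "es": { "description": "Una app web de ejemplo" }
      },
      "permissions": {
        "geolocation": { "description": "Needed to tag shots with a location" }
      },
      "package_path": "https://example.com/app.zip"
    }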

    On Firefox for Android, we implement this method by sending the URL to a Mozilla-managed service which builds an Android APK specifically for the app.

    APKs created by the Factory use Android’s excellent resource framework, so that the correct icon and translation are displayed to the user, respecting the user’s locale and phone screen.

    Web app permissions are rendered as Android permissions, so the user will have a completely native experience of installing your app.

    For packaged apps, the APK also includes a copy of the packaged zip file, so that no extra networking is required once the app is downloaded.

    For hosted apps, the first time the app is launched, the resources listed in its appcache are downloaded, so that subsequent launches can happen as quickly as possible, without requiring a network connection.
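    As a sketch, the appcache manifest of a hosted app is just a plain-text list of the resources to pre-fetch (the file names here are hypothetical):

    CACHE MANIFEST
    # v1 - change this comment to force clients to re-download
    index.html
    css/app.css
    js/app.js
    img/icon-128.png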

    And if you want to detect whether your code is running as an installed web app or in a regular webpage, the getSelf() method call will help you:

    if (window.navigator.mozApps) {
      // We're on a platform that supports the apps API.
      window.navigator.mozApps.getSelf().onsuccess = function() {
        if (this.result) {
          // We're running in an installed web app.
        } else {
          // We're running in a webpage.
          // Perhaps we should offer an install button.
        }
      };
    }

    Keeping your apps up-to-date

    For hosted apps, you update your apps as usual: just change your app on your server, and your users will pick up those changes the next time they run your app. You can change as much as you want, and your users will get the latest version of the app each time they launch and connect to your servers.

    For anything needing changes to the app’s manifest, your users will get an updated APK sent to them to update their existing installation.

    For example, if you want to change the app’s icon, or even name, changing the app’s manifest will cause the APK Factory service to regenerate the app’s APK, and notify your users that there is a new version available for install.

    For packaged apps, the same mechanism applies: change the app’s package zip file, then update the version number in the app’s manifest file, and the APK Factory will pick up those changes and notify your users that there’s a new APK to install. Simples.

    Is that it?

    This is an exciting project. It requires very little involvement from web developers, and no extra APIs to use; even so, it should represent a step forward in the usability of web apps on Android.

    Now that we have taken the step to generate APKs for web apps, this gives us a platform for further blurring the lines between web apps and apps. Check it out for yourself and help us improve the feature: get Firefox Beta from the Google Play Store, install apps from the Firefox Marketplace, and let us know what you think!

  3. jsDelivr – The advanced open source public CDN

    This is a guest post by Dmitriy Akulov and his project jsDelivr. – Editor’s note.

    As a developer you are probably aware of Google Hosted Libraries. Google offers an easy and fast way to include 12 of the most popular JavaScript libraries in your websites.

    But what if you are a webmaster and want to take advantage of a fast CDN for other, less popular projects too? Or what if you are a developer and want to make your project easier for other users to access and use?

    This is where jsDelivr comes into play. jsDelivr is a free and open source CDN created to help developers and webmasters. There are no popularity restrictions and all kinds of files are allowed, including JavaScript libraries, jQuery plugins, CSS frameworks, fonts and more.

    Adding a library

    To add a new library or update an existing one, all the developer has to do is clone our GitHub repository and apply the modifications they see fit. Once a moderator reviews the Pull Request and merges it, the files become instantly available from the official website.

    If a moderator is online, the approval should not take more than 20 minutes; otherwise it can take up to 10 hours until someone comes online. But once our auto-update utility comes online, review times will drop.

    Reliability

    But what actually makes it so advanced? The idea of jsDelivr was not to create another public CDN but to offer a super fast and reliable infrastructure that developers and website owners can trust and use. Any website, big or small, can use it without worry. There are no bandwidth limits, and our service is rock solid.

    Slow responses, timeouts and downtime are not tolerated, so we designed a unique system to overcome these problems and offer a product that even enterprise CDNs would be jealous of. Uptime and performance are top priorities: we monitor everything at all times, and we are always looking into new technologies and providers that may further improve our CDN.

    Infrastructure

    (jsDelivr network map)

    Unlike the competition, jsDelivr uses a unique multi-CDN infrastructure to offer the best possible uptime and performance. Its main backbone is built on top of CDN networks provided by MaxCDN and CloudFlare.

    We also use custom servers in locations where CDNs have little or no presence. In total, at this moment, this results in 42 global POP locations. In the future we plan to add even more locations to offer top performance even in less popular countries.

    Of course, lots of locations mean nothing if you can’t load balance across them correctly. For the load balancing system we use services provided by Cedexis. One of their main features is the real-time performance data they gather on all major CDN providers: 1.3 billion RUM (Real User Metrics) performance tests per day are processed and made available to all Cedexis users.

    Measuring performance

    To gather these RUM tests, they have deployed special JavaScript code on thousands of websites. Every visitor to one of these websites executes the code and tests different CDN providers in the background as they browse the website. The testing does not impact the browsing experience in any way and is completely transparent to the user. You can actually see how it works by visiting our website and opening the “Network” tab in the developer tools.

    The beauty of these tests is that they are not synthetic. They reflect the real performance real users will get if they download a file from one of those CDNs.

    The following information is then stored:

    • Performance metrics to each of our providers.
    • Availability metrics to each of our providers.
    • The browser’s User-Agent.
    • The first three octets of the user’s IP address.

    Now that we have all this information we can use it in our smart load balancing algorithm.

    Every user gets a unique response based on their location and ISP. Each time a user requests a file from jsDelivr, our algorithm extracts the performance and availability data it has for the last few minutes and figures out the optimal provider for that particular user at that particular time. All that in a few milliseconds.

    First it makes sure that all available providers are online. For this it uses the RUM availability data and a synthetic test that checks each provider’s uptime every minute. Then it sorts the providers by performance for the user’s ISP and location.

    Once it has the fastest provider, it returns that provider’s hostname to the user. So, for example, two users in London with different ISPs could get two different responses, because their ISPs have different routing and performance to different CDN providers. This smart system guarantees maximum uptime and fast loading times for all users. If a provider goes down, jsDelivr won’t experience any issues; it will immediately start serving from a different provider.
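    In JavaScript terms, the decision flow looks roughly like this. This is a toy sketch, not jsDelivr’s actual code, and the helper names are made up:

    function pickProvider(providers, rum, location, isp) {
      return providers
        .filter(function(p) { return rum.isUp(p); }) // availability first
        .sort(function(a, b) { // then sort by measured performance
          return rum.latency(a, location, isp) - rum.latency(b, location, isp);
        })[0]; // the fastest provider wins
    }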

    The algorithm also responds immediately to performance degradation. For example, if a CDN provider gets DDoSed in Europe and its response times increase, jsDelivr will pick up the change and simply stop using this provider in Europe, while still considering it for users in the USA and other locations that were not affected by the attack.

    Don’t rely on a single CDN for uptime and speed. Everything can go down, but the chances of two CDNs and multiple servers going down at the same time are very slim. This is why jsDelivr is an optimal solution for every website out there, no matter how big it is.

    I should also point out that MaxCDN, CloudFlare, Cedexis and the rest of the companies sponsor jsDelivr for free. It’s nice to see that there are companies out there that are willing to help open source projects and build a fast and free internet.

    Advanced Features

    jsDelivr also supports some interesting and very helpful features such as:

    Version Aliasing

    Instead of using a unique URL for each version to load a project with jsDelivr, you can use aliasing. Let’s take the project Abaaso as an example. At this moment the latest version is 3.10.50, and you can load it by specifying the exact version in your URL as always. But since this project gets updated very often, you would end up using an old version pretty soon. To overcome this problem you can now simply use the following URL:

    //cdn.jsdelivr.net/abaaso/3.10/abaaso.min.js

    By using 3.10 you tell jsDelivr to load the latest version it has in the 3.10 branch, which in this case is 3.10.50. This is the optimal solution for most authors, because they can load the latest minor version without worrying about major changes that could break their website.

    It is of course possible to load the latest version in the v3 branch by using the following URL:

    //cdn.jsdelivr.net/abaaso/3/abaaso.min.js

    And if for any reason you need to always load the latest available version in any major branch you can use:

    //cdn.jsdelivr.net/abaaso/latest/abaaso.min.js

    By using latest you tell the server to load the absolute latest version it has. This of course is dangerous, and given enough time it will break your website. So use this feature with caution.

    Load multiple files with a single HTTP request

    jsDelivr is the first CDN to support this kind of functionality. You can load multiple files using a single HTTP request, similar to combining and minifying JS files on your own server, but cached by jsDelivr’s huge and smart network.

    All you have to do is to build your own URL with the projects and files you want to combine and their versions if needed. For example, to load the latest version for projects abaaso, ace and alloyui you would use the following syntax:

    //cdn.jsdelivr.net/g/abaaso,ace,alloyui

    Keep in mind that loading the latest version is not recommended and, given enough time, will break your website. This is why you should specify exact versions or use version aliases:

    //cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2

    So jquery@2.1 will load 2.1.0 and angularjs@1.2 will load 1.2.14. Note that the above URL will load only the main file of each project and nothing else.

    If you want to load multiple files from a single project then you can do the following:

    //cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2.14(angular.min.js+angular-resource.min.js+angular-animate.min.js+angular-cookies.min.js+angular-route.min.js+angular-sanitize.min.js)

    If you want to load CSS, select CSS files using the above format. If all files in the group URL have a .css extension, the server will automatically respond with a Content-Type: text/css HTTP header. In all other cases (for /g/ URLs) Content-Type: application/javascript is used.

    Next, you simply include the URL in your website and you are done. Less DNS resolving, fewer TCP connections and fewer HTTP requests = a faster website.
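    For instance, with the aliased group URL from above, a single script tag pulls in both libraries with one request:

    <script src="//cdn.jsdelivr.net/g/jquery@2.1,angularjs@1.2"></script>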

    You can even use this feature to offer your users a builder to allow them to generate a URL with the modules they need and then load them all using a fast CDN.

    A real API

    jsDelivr has a fully featured API that developers can use in their websites, to create custom modules, and for anything else you might think of: https://github.com/jsdelivr/api

    You can request exactly what you need using our API, without downloading a huge JSON file of all packages. It also supports cdnjs and Google. This way developers have everything they need to build their applications.

    Auto-Updates

    jsDelivr libgrabber is a utility that is going to run on our servers and can auto-update all hosted projects if configured. The best part is that the authors don’t have to change anything in their repos; all changes are made on the jsDelivr side.

    All you need to do is create an update.json file, with some basic info, inside your project’s directory in the jsDelivr repo. This file also supports multiple sources for new versions, such as npm, Bower and GitHub repos directly. It is still under development but is planned for release soon.

    Try it out, help out!

    jsDelivr is a very interesting project that I enjoy developing and making better. It also relies heavily on the help of the community. Consider using it in your websites and hosting your projects there.

    And if you are interested in helping out, we can always use some help, just join the conversation on Github.

    Feel free to leave your comments and ask me any questions you might have.

    Thank you

  4. Introducing the Canvas Debugger in Firefox Developer Tools

    The Canvas Debugger is a new tool we’ll be demoing at the Game Developers Conference in San Francisco. It’s a tool for debugging animation frames rendered on a Canvas element. Whether you’re creating a visualization, animation or debugging a game, this tool will help you understand and optimize your animation loop. It will let you debug either a WebGL or 2D Canvas context.

    Canvas Debugger Screenshot

    You can debug an animation using a traditional debugger, like our own JavaScript Debugger in Firefox’s Developer Tools. However, this can be difficult, as it becomes a manual search for all of the various canvas methods you may wish to step through. The Canvas Debugger is designed to let you view the rendering calls from the perspective of the animation loop itself, giving you a much better overview of what’s happening.

    How it works

    The Canvas Debugger works by creating a snapshot of everything that happens while rendering a frame. It records all canvas context method calls. Each frame snapshot contains a list of context method calls and the associated JavaScript stack. By inspecting this stack, a developer can trace the call back to the higher level function invoked by the app or engine that caused something to be drawn.
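    For instance, in a hypothetical render loop like the one below, each pass through render() becomes one frame snapshot, and every context method call inside it is recorded along with the JavaScript stack that issued it:

    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');
    var x = 0;

    function render() {
      // Both calls below are context method calls, so both are recorded.
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillRect(x++ % canvas.width, 20, 50, 50); // a draw call
      requestAnimationFrame(render); // ends this frame's snapshot
    }
    requestAnimationFrame(render);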

    Certain types of Canvas context functions are highlighted to make them easier to spot in the snapshot. Quickly scrolling through the list, a developer can easily spot draw calls or redundant operations.

    Canvas Debugger Call Highlighting Detail

    Each draw call has an associated screenshot arranged in a timeline at the bottom of the screen as a “film-strip” view. You can “scrub” through this film-strip using a slider to quickly locate a draw call associated with a particular bit of rendering. You can also click a thumbnail to be taken directly to the associated draw call in the animation frame snapshot.

    Canvas Debugger Timeline Picture

    The thumbnail film-strip gives you a quick overview of the drawing process. You can easily see how the scene is composed to get the final rendering.

    Stepping Around

    You might notice a familiar row of buttons in the attached screenshot. They’ve been borrowed from the JavaScript Debugger and provide the developer a means to navigate through the animation snapshot. These buttons may change their icons at final release, but for now, we’ll describe them as they currently look.

    Canvas Debugger Buttons image

    • “Resume” – Jump to the next draw call.
    • “Step Over” – Goes over the current context call.
    • “Step Out” – Jumps out of the animation frame (typically to the next requestAnimationFrame call).
    • “Step In” – Goes to the next non-context call in the JavaScript debugger.

    Jumping to the JavaScript debugger by “stepping in” on a snapshot function call, or via a function’s stack, allows you to add a breakpoint and instantly pause if the animation is still running. Much convenience!

    Future Work

    We’re not done. We have some enhancements to make this tool even better.

    • Add the ability to inspect the context’s state at each method call, and highlight the differences in state between calls.
    • Measure the time spent in each draw call. This will readily show expensive canvas operations.
    • Make it easier to know which programs and shaders are currently in use at each draw call, allowing you to jump to the Shader Editor and tinker with shaders in real time. Better linkage to the Shader Editor in general.
    • Inspect hit regions, by either drawing individual regions separately, colored differently by id, or by showing the hit region id of a pixel when hovering over the preview panel with the mouse.

    And we’re just getting started. The Canvas Debugger should be landing in Firefox Nightly any day now. Watch this space for news of its landing and more updates.

  5. Flambe Provides Support For Firefox OS

    Flambe is a performant cross-platform open source game engine based on the Haxe programming language. Games are compiled to HTML5 or Flash and can be optimized for desktop or mobile browsers. The HTML5 renderer uses WebGL, but falls back to the canvas tag and functions nicely even on low-end phones. Flash rendering uses Stage3D, and native Android and iOS apps are packaged using Adobe AIR.

    Flambe provides many other features, including:

    • simple asset loading
    • scene management
    • touch support
    • complete physics library
    • accelerometer access

    It has been used to create many of the Nickelodeon games available at nick.com/games and m.nick.com/games. To see other game examples, and some of the other well-known brands making use of the engine, have a look at the Flambe Showcase.

    In the last few weeks, the developers of the Flambe engine have been working to add support for Firefox OS. With the 4.0.0 release of Flambe, it is now possible to take Flambe games and package them into publication-ready Firefox OS applications, complete with manifest.

    Firefox Marketplace Games

    To get an idea of what is possible with the Flambe engine on the Firefox OS platform, take a look at two games that were submitted recently to the Firefox Marketplace. The first — The Firefly Game written by Mark Knol — features a firefly that must navigate through a flock of hungry birds. The game’s use of physics, sound and touch are very effective.

    The second game, entitled Shoot’em Down, tests the player’s ability to dodge fire while shooting down as many enemy aircraft as possible. The game was written by Bruno Garcia, who is the main developer of the Flambe engine. The source for this game is available as one of the engine’s demo apps.

    Building a Firefox OS App using Flambe

    Before you can begin writing games using the Flambe engine, you will need to install and setup a few pieces of software:

    1. Haxe. Auto installers are available for OSX, Windows and Linux on the download page.
    2. Node.js for building projects. Version 0.8 or greater is required.
    3. A Java runtime.

    Once those prerequisites are met, you can run the following command to install Flambe:

    # Linux and Mac may require sudo
    npm install -g flambe 
    flambe update

    This will install Flambe and you can begin writing apps with the engine.

    Create a Project

    To create a new project, run the following command.

    flambe new ProjectName

    This will create a directory named whatever you supplied for ProjectName. In this directory you will have several files and other directories for configuring and coding your project. By default the new command creates a very simple project that illustrates loading and animating an image.

    A YAML file (flambe.yaml) within the project directory defines several characteristics of the project for build purposes. This file contains tags for the developer, the name and version of the app, and other project metadata, such as a description. In addition, it contains the main class name that will be fired as the entry point to your application. This tag needs to be set to a fully qualified Haxe class name. That is, if you use a package name in your Haxe source file, you need to prepend the package name in this tag, like this: packagename.Classname. (The default example uses urgame.Main.) You can also set the orientation for your app within the YAML file.

    Of specific note for Firefox OS developers, a section of the YAML file contains a partial manifest.webapp that can be altered. This data is merged into a complete manifest.webapp when the project is built.

    The main project folder also contains a directory for assets (images, sounds, animations, and particle effects files). The icons folder contains the icons that will be used with your app. The src folder contains the Haxe source code for your application.

    Build the Project

    Flambe provides a build method to compile your code to the appropriate output. To build the app run:

    flambe build <output>

    Where output is html, flash, android, ios, or firefox. Optionally you can add the --debug option to the build command, producing output more suitable for debugging. For Firefox OS this will produce non-minified JavaScript files. The build process will add a build directory to your application. Inside the build directory, a firefox directory will be created containing your Firefox OS app.

    Debug the Project

    You can debug your application in the Firefox App Manager. See Using the App Manager for details on installing and debugging using the App Manager. Within the App Manager you can add the built app using the Add Packaged App button and selecting the ProjectName/build/firefox directory. Debugging for other platforms is described in the Flambe documentation.
    The --debug option can provide additional insight for debugging and performance tuning. In addition to being able to step through the generated JavaScript, Flambe creates a source map that allows you to look through the original Haxe files while debugging.
    To see the original Haxe files in the debugger, select the Debugger options icon in the far right corner of the debugger and choose Show Original Sources.
    Also, when using the --debug option you can use a shortcut key (Ctrl + O) to initiate a view of your app that illustrates overdraw: the number of times each pixel is drawn in a frame. The brighter the pixel, the more times it is being drawn. By reducing the amount of overdraw, you should be able to improve the performance of your game.

    A Bit about Haxe and Flambe

    Haxe is an object-oriented, class-based programming language that can be compiled to many other languages. In Flambe, your source code needs to be written using Haxe-specific syntax. Developers familiar with Java, C++ or JavaScript will find learning the language relatively straightforward. The Haxe website contains a reference guide that nicely documents the language. For editing, there are many options available for working with Haxe. I am using Sublime Text with the Haxe plugin.

    Flambe offers some additional classes that need to be used when building your app. To get a better understanding of these classes, let’s walk through the simple app that is created when you run the flambe new command. The Main.hx file created in the source directory contains the Haxe source code for the Main Class. It looks like this:

    package urgame;
     
    import flambe.Entity;
    import flambe.System;
    import flambe.asset.AssetPack;
    import flambe.asset.Manifest;
    import flambe.display.FillSprite;
    import flambe.display.ImageSprite;
     
    class Main
    {
      private static function main ()
      {
        // Wind up all platform-specific stuff
        System.init();
     
        // Load up the compiled pack in the assets directory named "bootstrap"
        var manifest = Manifest.fromAssets("bootstrap");
        var loader = System.loadAssetPack(manifest);
        loader.get(onSuccess);
      }
     
      private static function onSuccess (pack :AssetPack)
      {
        // Add a solid color background
        var background = new FillSprite(0x202020, System.stage.width, System.stage.height);
        System.root.addChild(new Entity().add(background));
     
        // Add a plane that moves along the screen
        var plane = new ImageSprite(pack.getTexture("plane"));
        plane.x._ = 30;
        plane.y.animateTo(200, 6);
        System.root.addChild(new Entity().add(plane));
      }
    }

    Haxe Packages and Classes

    The package keyword provides a way for classes and other Haxe data types to be grouped and addressed by other pieces of code, organized by directory. The import keyword is used to include classes and other Haxe types within the file you are working with. For example, import flambe.asset.Manifest will import the Manifest class, while import flambe.asset.* will import all types defined in the asset package. If you try to use a class that you have not imported into your code and run the build command, you will receive an error message stating that the particular class could not be found. All of the Flambe packages are documented on the Flambe website.

    Flambe Subsystem Setup and Entry point

    The main function is similar to that in other languages and acts as the entry point into your app. Each Flambe application must have exactly one main function. In the main function, the System.init() function is called to set up all the subsystems that will be needed by your code and the Flambe engine.

    Flambe Asset Management

    Flambe uses a dynamic asset management system that allows images, sound files, etc. to be loaded very simply. In this particular instance the fromAssets function defined in the Manifest class examines the bootstrap folder located in the assets directory to create a manifest of all the available files. The loadAssetPack System function creates an instance of the AssetPack based on this manifest. One of the functions of AssetPack is get, which takes a function parameter to call when the asset pack is loaded into memory. In the default example, the only asset is an image named plane.png.

    Flambe Entities and Components

    Flambe uses an abstract concept of Entities and Components to describe and manipulate game objects. An Entity is essentially just a game object with no defining characteristics. Components are characteristics that are attached to entities. For example, an image component may be attached to an entity. Entities are also hierarchical and can be nested. For example, entity A can be created and an image could be attached to it. Entity B could then be created with a different image. Entity A could then be attached to the System root (the top-level Entity), and Entity B could be attached to Entity A or to the System root. The entity nesting order is used for rendering order, which can be used to make sure smaller visible objects are not obscured by other game objects.

    Creating Entities and Components in the Sample App

    The onSuccess function in the default sample is called by the loader instance after the AssetPack is loaded. The function first creates an instance of a FillSprite Component, which is a rectangle defined by the size of the display viewport width and height. This rectangle is colored using the hex value defined in the first parameter. To actually have the FillSprite show up on the screen you first have to create an Entity and add the Component to it. The new Entity().add(background) method first creates the Entity and then adds the FillSprite Component. The entire viewport hierarchy starts at the System.root, so the addChild command adds this new Entity to the root. Note this is the first Entity added and it will be the first rendered. In this example this entity represents a dark background.

    Next the plane image is created. This is done by passing the loaded plane image to the ImageSprite Component constructor. Note that the AssetPack class’s getTexture method is being used to retrieve the loaded plane image. The AssetPack class contains methods for retrieving other types of Assets as well. For example, to retrieve and play a sound you would use pack.getSound("bounce").play();.

    Flambe Animated Data Types

    Flambe wraps many of the default Haxe data types in classes and introduces a few more. One of these is the AnimatedFloat class. This class essentially wraps a float and provides some utility functions that allow the float to be altered in a specific way. For example, one of the functions of the AnimatedFloat class is named animateTo, which takes parameters to specify the final float value and the time in which the animation will occur. Many components within the Flambe system use AnimatedFloats for property values. The plane that is loaded in the default application is an instance of the ImageSprite Component. Its x and y placement values are actually AnimatedFloats. AnimatedFloat values can be set directly but special syntax has to be used (value._).

    In the example, the x value for the ImageSprite is set to 30 using this syntax: plane.x._ = 30;. The y value for the ImageSprite is then animated to 200 over a 6 second period. The x and y values for an ImageSprite represent the upper left corner of the image when placed into the viewport. You can alter this using the centerAnchor function of the ImageSprite class. After this call, the x and y values will be in reference to the center of the image. While the default example does not do this, it could be done by calling plane.centerAnchor();. The final line of code just creates a new Entity, adds the plane Component to the Entity and then adds the new Entity to the root. Note that this is the second Entity added to the root and it will render after the background is rendered.

    Flambe Event Model

    Another area of Flambe that is important to understand is its event model. Flambe uses a signal system where subsystems, Components and Entities expose signal properties that can be connected to in order to listen for a specific event. For example, resizing the screen fires a signal. This event can be hooked up using the following code.

    System.stage.resize.connect(function () {
      // do something on resize
    });

    This is a very nice feature when dealing with other components within apps. For example, to do something when a user either clicks on or touches an ImageSprite within your app you would use the following code:

    //ImageSprite Component has pointerDown signal property
    myBasketBallEntity.get(ImageSprite).pointerDown.connect(function (event) {
        bounceBall();
    });

    In this case the pointerDown signal is fired when a user either uses a mouse down or touch gesture.

    Demo Apps

    The Flambe repository also contains many demo apps that can be used to further learn the mechanics and APIs for the engine. These demos have been tested on Firefox OS and perform very well. Pictured below are several screenshots taken on a Geeksphone Keon running Firefox OS.

    Of particular note in the demos are the physics and particles demos. The physics demo uses the Nape Haxe library and allows for some very cool environments. The Nape website contains documentation for all the packages available. To use this library you need to run the following command:

    haxelib install nape

    The particle demo illustrates using particle descriptions defined in a PEX file within a Flambe-based game. PEX files can be defined using a particle editor, like Particle Designer.

    Wrapping Up

    If you are a current Flambe game developer with one or more existing games, why not use the new version of the engine to compile and package them for Firefox OS? If you are a Firefox OS developer looking for a great way to develop new games for the platform, Flambe offers an excellent means of developing engaging, performant games for Firefox OS, and for many other platforms besides!

    And, if you are interested in contributing to Flambe, we’d love to hear from you as well.

  6. App basics for Firefox OS – a screencast series to get you started

    Over the next few days we’ll release a series of screencasts explaining how to start your first Open Web App and develop for Firefox OS.

    Firefox OS - Intro and hello

    Each of the screencasts is short enough to watch in a quick break, and the whole series should not take more than an hour of your time. The series features Jan Jongboom (@janjongboom), Sergi Mansilla (@sergimansilla) of Telenor Digital and Chris Heilmann (@codepo8) of Mozilla, and was shot over three days in Oslo, Norway at the offices of Telenor Digital in February 2014.

    Here are the three of us telling you about the series and what to expect:

    Firefox OS is an operating system that brings the web to mobile devices. Instead of being a new OS with new technologies and development environments, it builds on standardised web technologies that have been in use for years. If you are a web developer and you want to build a mobile app, Firefox OS gives you the tools to do so without having to change your workflow or learn a totally new development environment. In this series of short videos, developers from Mozilla and Telenor met in Oslo, Norway to explain in a few steps how you can get started building applications for Firefox OS. You’ll learn:

    • how to build your first application for Firefox OS
    • how to debug and test your application both on the desktop and the real device
    • how to get it listed in the marketplace
    • how to use the APIs and special interfaces Firefox OS offers a JavaScript developer to take advantage of the hardware available in smartphones.

    In addition to the screencasts, you can download the accompanying code samples from GitHub. If you want to try the code examples out for yourself, you will need to set up a very simple development environment. All you need is:

    • A current version of Firefox (which comes out of the box with the developer tools you need) – we recommend getting Firefox Aurora or Nightly if you really want to play with the state-of-the-art technology.
    • A text editor – in the screencasts we used Sublime Text, but any will do. If you want to be really web native, you can try Adobe Brackets.
    • A local server or a server to push your demo files to. A few of the demo apps need HTTP connections instead of local ones.

    sergi and chris recording

    Over the next few days we’ll cover the following topics:

    In addition to the videos, you can also go to the Wiki page of the series to get extra information and links on the subjects covered.

    Come back here to see the links appear day by day or follow us on Twitter at @mozhacks to get information when the next video is out.

    jan recording his video

    Once the series is out, there’ll be a Wiki resource to get them all in one place. Telenor are also working on getting these videos dubbed in different languages. For now, stay tuned.

    Many thanks to Sergi, Jan, Jakob, Ketil, Nathalie and Anne from Telenor for making all of this possible.

  7. Audio Tags: Web Components + Web Audio = ♥

    Article written by Soledad Penadés, edited by Angelina Fabbro.

    Last week we released Brick 1.0, our carefully curated set of web components for rapid development. Using components makes it very easy to use and integrate these UI widgets with existing code and frameworks.

    And this week we bring you Audio Tags, an experiment in building Web Components that represent Web Audio blocks, which lets us construct a complete instrument with an interface to play it. With reusable audio blocks, developers can experiment with Web Audio without having to write a lot of boilerplate code.

    Let’s build a simple synthesiser to demonstrate how the different tags work together!

    The Audio Context

    The first thing we need is an audio context. If you’ve ever done any Canvas programming, this will sound familiar. The context is akin to a toolbox: it’s got the functions (the tools) that you need and it’s also where everything happens. All other audio tags will be placed inside a context.
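    Under the hood this corresponds to the raw Web Audio entry point; without Audio Tags you would create the context yourself in JavaScript, along these lines:

    // The constructor was still vendor-prefixed in some browsers at the time.
    var context = new (window.AudioContext || window.webkitAudioContext)();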

    This is what an audio context looks like when using Audio Tags:

    <audio-context>
    </audio-context>

    That’s it!

    Oscillators

    While being able to create an audio context by typing just one tag declaration is great, it is not particularly exciting if we can’t get any audible output. We need something that generates a sound, and for this we’ll start with something simple and use an oscillator. As the name implies, its output is a signal that oscillates between two values: -1 and 1, generating a periodic waveform. We will place it inside an audio context to have its output automatically routed via the context’s output to the computer’s speakers:

    <audio-context>
        <audio-oscillator>
        </audio-oscillator>
    </audio-context>

    Context with oscillator

    (See it live).

    In the real world, oscillators can generate different signal shapes. Likewise, in the Web Audio world we have analogous wave types that we can use: sine, square, sawtooth, and triangle. Since web components are first class DOM elements, we can specify the desired wave type by using an attribute:

    <audio-oscillator type="square"></audio-oscillator>

    You could even change it live by opening the console and typing this:

    document.querySelector('audio-oscillator').type = 'square';

    Similarly, you can also change the frequency the oscillator is running at by setting the frequency attribute:

    <audio-oscillator frequency="220"></audio-oscillator>

    Mixer

    Having a running oscillator is just the first step. Most synthesisers available have more than one oscillator playing at the same time to make the sound more complex and nuanced. We need some way of playing two or more sounds in parallel, while combining them into a single output.

    This is commonly known as mixing audio, and therefore we need a mixer:

    <audio-context>
        <audio-mixer>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscillator frequency="440"></audio-oscillator>
        </audio-mixer>
    </audio-context>

    The mixer will take the output of each of its children, and join them together to form its own output, which is then connected to the context’s output. Note also that since we’re dealing with DOM elements, when we say “children” we literally mean the mixer’s DOM children elements.

    Mixer

    (Example).

    Chain (and oscilloscope)

    When you start adding multiple sounds it’s useful to be able to see what is going on in the synthesiser. What if we could, somehow, plug a component between the output of one child and the input of another, and display what the sound wave looks like at that point?

    We can’t do that with the mixer, because it just joins all the outputs together. We need a new abstract structure: chains. An audio chain connects the output of its first child to the input of the second child, the output of the second child to the input of the third, and so on, until we reach the last child, whose output is connected to the chain’s output.

    Or in other words: while the mixer connects things in parallel, the chain connects them serially.

    Let’s connect a new element, the oscilloscope, to the output of an oscillator, using a chain. The oscilloscope will just display what is connected to its input, and the signal will pass through to its output without being modified at all. You can change the oscillator’s wave type to square, and see how the oscilloscope changes its display accordingly.

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscilloscope></audio-oscilloscope>
        </audio-chain>
    </audio-context>

    Chain

    (Example).

    Filter

    Synthesisers don’t limit themselves to just running several oscillators at the same time. They often add postprocessing units to this raw generated audio, which give the synthesiser its own distinctive sound.

    There are many types of postprocessing effects, and some of the most popular are filters, which roughly work by boosting certain frequencies or removing others. For example, we can chain a low pass filter to the output of an oscillator, and that would only allow the lower frequencies to go through. This produces a sort of dampening effect, as if we had put on some ear muffs: higher frequencies travel through the air and we hear them with our outer ears, while lower frequencies also tend to travel through the ground and objects, so rather than just hearing them we feel them with our body, whether or not something is covering our ears.

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-filter type="lowpass"></audio-filter>
        </audio-chain>
    </audio-context>

    Filter

    (Example).

    Web Audio natively implements biquad filters, and as with the audio-oscillator tag, you can alter the filter behaviour by setting its type attribute. For example:

    <audio-filter type="highpass"></audio-filter>

    You could even insert several oscilloscopes: one before and another after a filter, to see the effect the filter has on the signal:

    <audio-context>
        <audio-chain>
            <audio-oscillator frequency="220"></audio-oscillator>
            <audio-oscilloscope></audio-oscilloscope>
            <audio-filter type="lowpass"></audio-filter>
            <audio-oscilloscope></audio-oscilloscope>
        </audio-chain>
    </audio-context>

    Filter with two oscilloscopes

    (Example).

    And finally, the minisynth

    We have enough components to build a synthesiser now! We want two oscillators playing together (one an octave higher than the other), and a filter to make the sound a little bit less harsh and more “self-contained”. So, without further ado, this is the structure for representing our minimal synth, the <mini-synth>, using the components we’ve introduced so far:

    <audio-chain>
        <audio-mixer>
            <audio-oscillator></audio-oscillator>
            <audio-oscillator></audio-oscillator>
        </audio-mixer>
        <audio-filter type="lowpass"></audio-filter>
    </audio-chain>

    For the sake of comparison, this is more or less how we would assemble a similar setup using raw Web Audio API objects and functions:

    var mixerGain = context.createGain();
     
    var osc1 = context.createOscillator();
    var osc2 = context.createOscillator();
    osc1.connect(mixerGain);
    osc2.connect(mixerGain);
     
    var filter = context.createBiquadFilter();
    mixerGain.connect(filter);
     
    // and the actual output is at *filter*

    It’s not that the code is particularly complicated, it just doesn’t have the nice visual hierarchy of the declarative syntax. The visual cues from the syntax make understanding the relationship between elements quick and easy.

    We still need a few lines of JavaScript to make the <mini-synth> component behave like a synthesiser: it has to start and stop both oscillators at the same time. We can take advantage of the fact that the AudioTag prototype has some common base methods that we can overload to get specific behaviours in our custom components.

    In this particular case we’ll overload the start and stop methods to make the oscillators start and stop playing respectively when we call those methods in the synth. This way we abstract the internals of the synthesiser from the world, while still exposing a consistent interface.

    start: function(when) {
        // We want to make sure we don't clip (i.e. go under -1 or over 1),
        // so we'll divide the gain by the number of oscillators in the synth
        var oscGain = this.oscillators.length > 0 ? 1.0 / this.oscillators.length : 1.0;
        this.oscillators.forEach(function(osc) {
            osc.gain = oscGain;
            osc.start(when);
        });
    }
     
    stop: function(when) {
        this.oscillators.forEach(function(osc) {
            osc.stop(when); 
        });
    }

    The implementation should be fairly easy to follow.

    You might be wondering about the when parameter. It is used to tell the browser when to actually start the action, so that you can schedule various events in the future with accurate timing. It means “execute this action at time when”; in the Web Audio API, when is expressed in seconds on the audio context’s clock, not milliseconds. In our case we’re just using a value of 0, which means “do that immediately”. I advise you to read more about when in the Web Audio spec.
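    For instance, with the raw API you schedule against the context clock, and a value of 0 (or any time already in the past) means “now”. A small sketch:

    var osc = context.createOscillator();
    osc.connect(context.destination);
    osc.start(0); // start immediately
    osc.stop(context.currentTime + 2); // stop two seconds from now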

    We also need to implement a method for actually telling the synthesiser which note to play, or in other words, which frequency each oscillator should be running at. So let’s implement noteOn:

    noteOn: function(noteNumber) {
        this.oscillators.forEach(function(osc, index) {
            // Each oscillator should play in a higher octave
            // Each octave is composed of 12 notes
            var oscNoteNumber = noteNumber + 12 * index;
            // We're using a library to convert note numbers to frequencies
            var frequency = MIDIUtils.noteNumberToFrequency(oscNoteNumber);
            osc.frequency = frequency; 
        });
    }

    You don’t need to use MIDIUtils, but it comes in handy if you ever want to jam with an instrument in your browser alongside someone using a more traditional MIDI instrument. By using standard frequencies you can be sure that both instruments will be tuned to the same pitch, and that is GOOD.
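    As a quick illustration of that mapping (note numbers follow the MIDI convention, where 69 is concert A):

    MIDIUtils.noteNumberToFrequency(69); // 440 Hz, concert A
    MIDIUtils.noteNumberToFrequency(57); // 220 Hz, one octave lower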

    We also need a way of triggering notes in the synthesiser, so what better way than an on-screen keyboard component?

    <audio-keyboard octaves="2"></audio-keyboard>

    will insert a keyboard component with 2 octaves. Once the keyboard gets focus (by clicking on it), you can tap keys on your computer’s keyboard and it will emit noteon events. If we listen for those, we can then send them to the synthesiser. The same goes for the noteoff events:

    keyboard.addEventListener('noteon', function(e) {
     
      var noteIndex = e.detail.index;
      // 48 is the base note here = C-3
      minisynth.noteOn(parseInt(noteIndex, 10) + 48);
      minisynth.start();
     
    }, false);
     
    keyboard.addEventListener('noteoff', function(e) {
     
      minisynth.noteOff();
     
    }, false);

    So it is DEMO time!

    Minisynth

    And now that we have a synthesiser we can say we’re rockstars! But rockstars need to look cool. Real-life rockstars have their own signature guitars and customised cabinets. And we have… CSS! We can go as wild as we want with CSS, so just press the Become a rockstar button on the demo and watch as the synthesiser becomes something else thanks to the magic of CSS.

    Minisynth, rockstarified

    Looking behind the curtains

    So far we’ve only talked about these fancy new audio tags and assumed that they are magically available in your browser, even though it is obvious they are non-standard elements. We haven’t explained where they come from. Well, if you’ve read this far already you deserve to be shown the secrets of the kingdom!

    If you look at the source code of any of the examples, you’ll notice that we’re consistently including the AudioTags.bundle.js (line 18) and AudioTags.bundle.css (line 6) files. The CSS is not particularly exciting; the real magic happens in the JavaScript. This file includes a couple of utility libraries that give us the ability to define custom tags in the browser, and then the code for defining and making these new tags available in the browser.

    On the utility side, we first include AudioContext-MonkeyPatch, for unifying Web Audio API disparities between browsers and enabling us to use a modern, consistent syntax. If you want to know more about writing portable Web Audio code, you can have a look at this article.

    The second library we’re including is X-Tag, and more specifically, its very innermost core. X-Tag is a custom elements polyfill, and custom elements are a part of the emerging Web Components spec, meaning this stuff will be built right into the browser soon. X-Tag is the same library that Mozilla Brick uses. You can learn how to make your own custom elements with this article.

    That said, if you plan to use Brick and Audio Tags in the same project, problems will probably ensue, since both Brick and Audio Tags include X-Tag’s core in their distribution bundles. The authors of both libraries are discussing the best way to proceed, but we haven’t settled on any action yet, because Audio Tags is such a newcomer to the X-Tag powered library scene. In any case, the most likely outcome is that we’ll offer an option to build Brick and Audio Tags without including the X-Tag core.

    Also, here is a video of this same material at CascadiaJS, so you can watch someone build it right in front of you. It may help your understanding of the topic:

    What’s next for Audio Tags?

    Many people have been asking me what’s next for Audio Tags. What are the upcoming features? What have I planned? How do you contribute? How do we go about adding new tags?

    To be honest? I have no idea! But that’s the beauty of it. This is just a starting point, an invitation to think about, play with and discuss this notion of declarative audio components. There is, of course, a list of things that don’t work yet, some random ideas and maybe potential features in the Audio Tags README file. I will probably keep extending it and filling the gaps; it is a good playground for experimenting with audio without getting too messy, and also a good test for Web Components that go past the usual “encapsulated UI widgets on steroids” notion.

    Some people have found the project inspiring in itself; others thought that it would be useful for teaching signal processing, others mixed it with accelerometer data to create physically-controlled synthesisers, and others decided to ditch the audio side of it and just build custom components for WebRTC purposes. It’s up to each one of you to contribute if you feel like doing so!

  8. Create Add-ons for Australis to Win a Firefox OS Phone

    Firefox 29 (“Australis”) includes significant design and customization improvements, and we’re challenging you to create add-ons that look and feel great in it.

    Between March 11 and April 15, 2014, create add-ons that take full advantage of the new design, which opens up new customization opportunities and streamlines the add-on experience in your browser. A panel of judges will pick one winner and two runners-up from each prize category.

    All winners will receive Firefox OS phones, and the first-prize winners in each category will also receive a collection of Mozilla gear.

    The Categories

    • Best overall add-on – an add-on that best makes use of the new Australis features, like the new toolbar widgets and tab appearance.
    • Best complete theme – a complete theme that most creatively alters the look and feel of Australis.
    • Best bookmark add-on – an innovative bookmarking add-on that works well with the Australis theme.

    Add-ons in Australis

    In order to create great entries for this contest you will need to know what’s new in Australis. Here is a quick summary of what we’ve been publishing on the Add-ons Blog.

    Changes

    The toolbars have changed significantly in Australis. The Add-on Bar at the bottom has been removed, and instead there is a new menu panel that extends the toolbar with buttons and widgets. It is activated by clicking on the button at the right end of the main toolbar. All the items in this new menu are customizable and it’s possible to add add-on buttons and widgets to it as well.

    The icons in the main toolbar are 18×18 pixels. However, a 1px padding is expected, so the 16×16 pixel icons you should already be using for the main toolbar in modern versions of Firefox will work without any changes. Icons are 32×32 pixels in the menu panel and also during customization. So, if you have an add-on that adds a toolbar button to the main toolbar using the usual guidelines of overlaying the button into the palette and then adding it to the toolbar with JS on first run, everything should work the same, and the only change you need is CSS along these lines:

    /* Original CSS */
    #my-button {
      list-style-image: url("chrome://my-extension/skin/icon16.png");
    }
     
    /* Added for Australis support */
    #my-button[cui-areatype="menu-panel"],
    toolbarpaletteitem[place="palette"] > #my-button {
      list-style-image: url("chrome://my-extension/skin/icon32.png");
    }

    Note that buttons in the Australis theme have the cui-areatype attribute set when placed in the UI. The possible values are menu-panel and toolbar. You can use the toolbar value to give the button a different style in Australis and non-Australis themes.

    Australis for Add-on Developers: Part 1 contains more details.

    New Customization API

    Another exciting addition to Australis is the ability to create toolbar widgets using the CustomizableUI module. You will be able to easily create simple buttons and more interesting widgets with very little code, both for restartless and more conventional add-ons. Here’s a sample:

    CustomizableUI.createWidget({
        id : "aus-hello-button",
        defaultArea : CustomizableUI.AREA_NAVBAR,
        label : "Hello Button",
        tooltiptext : "Hello!",
        onCommand : function(aEvent) {
          let win = aEvent.target.ownerDocument.defaultView;
     
          win.alert("Hello!");
        }
    });

    Australis for Add-on Developers: Part 2 demonstrates how to leverage this API with two demos and plenty of code to play with.

    Get Started!

  9. The Translation of the Firetext App

    The History

    Firetext is an open-source word processor. The project was started in early 2013 by @HRanDEV, @logan-r, and me (@Joshua-S). The goal of the project was to provide a user-friendly editing experience, and to fill a major gap in functionality on Firefox OS.

    Firetext 0.3

    In the year since its initiation, Firetext has become one of the ten most popular productivity apps in the Firefox Marketplace. We made a myriad of additions: Firetext gained Dropbox integration, enabling users to store documents in the cloud, and Night Mode, a feature that automatically adjusts the interface to the surrounding light level. There was also a new interface design, better performance, and web activity support.

    Even with all of these features, Firetext’s audience remained rather small. We only supported the English language, and according to Exploredia, only 17.65% of the world’s population speaks English fluently. So, we decided to localize Firetext.

    The Approach

    After reading a Hacks post about Localizing Firefox Apps, we settled on a combination of webL10n and Google Translate as our localization tools. We decided to localize in the languages known by our contributors (Spanish and German), and then use Google Translate for the rest. Eventually, we planned to grow a community that could contribute translations, instead of relying on often erratic machine translations.

    The Discovery

    A few months passed, and still no progress. The task was extremely daunting, and we did not know how to proceed. This stagnation continued until I stumbled upon a second Hacks post, Localizing the Firefox OS Boilerplate App.

    It was like a dream come true. Mozilla had started a program to help smaller app developers with the localization process. We could benefit from their larger contributor pool, while helping them provide a greater number of apps to foreign communities.

    I immediately contacted Mozilla about the program, and was invited to set up a project on Transifex. The game was on!

    The Code

    I started by creating a locales directory that would contain our translation files. I created a locales.ini file in that directory to show webL10n where to find the translations. Finally, I added a folder for each locale.

    locales.ini - Firetext
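    As a rough sketch of the idea (the locale list here is illustrative), locales.ini holds a default import followed by a section per locale, pointing webL10n at that locale’s strings:

    @import url(en_US/app.properties)
     
    [es]
    @import url(es/app.properties)
     
    [de]
    @import url(de/app.properties)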

    I then tagged each translatable element in the HTML files with a data-l10n-id attribute, and localized alert()s and our other scripted notifications using webL10n’s document.webL10n.get() (or _()) function.
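    For illustration, the markup and script end up along these lines (the IDs and string keys here are hypothetical, not Firetext’s actual ones):

    <!-- Tagged for translation: webL10n swaps in the localized string -->
    <button data-l10n-id="save-document">Save</button>
     
    <script>
      // _() is the usual shorthand for document.webL10n.get()
      var _ = document.webL10n.get;
      alert(_('document-saved'));
    </script>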

    It was time to add the translations. I created an app.properties file in the locales/en_US directory and referenced it from locales.ini. After doing that, I added all of the strings that needed to be translated.

    app.properties - Firetext
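    The properties file itself is just key/value pairs, one per string (these entries are made up for illustration):

    save-document = Save
    document-saved = Document saved!
    settings = Settings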

    webL10n automatically detects the user’s default locale, but we also needed to be able to change locales manually. To allow this, I added a select element to the Firetext settings panel containing all of the prospective languages.

    Settings - Firetext

    Even after all of this, Firetext was not really localized; we only had an English translation. This is where Transifex comes into the picture.

    The Translation

    I created a project for Firetext on Transifex, and then added a team for each of the languages requested in our GitHub issue. I then uploaded the app.properties file as a resource.

    I also uploaded the app description from our manifest.webapp for translation as a separate resource.

    Firetext on Transifex

    Within hours, translations came pouring in. Within the first week, Hebrew, French, and Spanish were completely translated! I added them to our GitHub repository by downloading the translation properties file, and placing it in the appropriate locale directory. I then enabled that language in the settings panel. The entire process was extremely simple and speedy.

    The Submission

    Now that Firetext had been localized, I needed to submit it back to the Firefox Marketplace. This was a fairly straightforward process: just download the zip, strip out the git files, and add in the API key for our error reporting system.

    In less than one day, Firetext was approved, and made available for our global user base.  Firetext is now available in eight different languages, and I can’t wait to see the feedback going forward!

    The Final Thoughts

    In retrospect, probably the most difficult part of localizing Firetext was supporting RTL (right-to-left) languages. That was a bit of a daunting task, but the results have been well worth the effort! All in all, localization was one of the easiest features that we have implemented.
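    For what it’s worth, much of the RTL work boils down to mirroring the layout when the document direction flips. A tiny hypothetical sketch, assuming the app sets dir="rtl" on the root element for RTL locales (Firetext’s real rules are more involved):

    /* Mirror left-anchored UI for right-to-left locales */
    html[dir="rtl"] .drawer {
      left: auto;
      right: 0;
    }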

    As Swiss app developer Gerard Tyedmers, creator of grrd’s 4 in a Row and grrd’s Puzzle, said:

    “…I can say that localizing apps is definitely worth the work. It really helps finding new users.

    The l10n.js solution was a very useful tool that was easy to implement. And I am very happy about the fact that I can add more languages with a very small impact on my code…”

    I couldn’t agree more!

    Firetext Editor - Spanish

    Editor’s Note: The Invitation

    Have a great app like Firetext?  You’re invited too!  We encourage you to join Mozilla’s app localization project on Transifex. With a localized app, you can extend your reach to include users from all over the world, and by so doing, help to support a global base of open web app users.

    For translators, mobile app localization presents some interesting translation and interface design challenges. You’ll need to think of the strings you’re working with in mobile scale, as interaction elements on a small screen. The localizer plays an important role in creating an interface that people in different countries can easily use and understand.  Please, get involved with Firetext or one of our other projects.

    This project is just getting started, and we’re learning as we go. If you have questions or issues not addressed by existing resources, such as the Hacks blog series on app localization, the Transifex help pages, and the other articles and repos referenced above, you can contact us. We’ll do our best to point you in the right direction. Thanks!

  10. Lessons learnt building ViziCities

    Just over 2 weeks ago Peter Smart and Robin Hawkes released the first version of ViziCities to the world. It’s free to use and open-sourced under an MIT license.

    In this post I will talk about the lessons learnt during the development of ViziCities. From application architecture to fine-grained WebGL rendering improvements, we learnt a lot in the past year, and we hope that by sharing our experiences we can help others avoid the same mistakes.

    What is ViziCities?

    In a rather geeky nutshell, ViziCities is a WebGL application that allows you to visualise anywhere in the world in 3D. Its primary purpose is to look at urban areas, though it’ll work perfectly well in other places if they have decent OpenStreetMap coverage.

    Demo

    The best way to explain what ViziCities does is to try it out yourself. You’ll need a WebGL-enabled browser and an awareness that you’re using pre-alpha quality software. Click and drag your way around, zoom in using the mouse wheel, and rotate the camera by clicking the mouse wheel or holding down shift while clicking and dragging.

    You can always take a look at this short video if you’re unable to try the demo right now:

    What’s the point of it?

    We started the project for multiple reasons. One of those reasons is that it’s an exciting technical and design challenge for us – both Peter and I thrive on pushing ourselves to the limits by exploring something unknown to us.

    Another reason is that we were inspired by the latest SimCity game and how it visualises data about the performance of your city – in fact, Maxis, the developers behind SimCity, reached out to us to tell us how much they like the project!

    There’s something exciting about creating a way to do that for real-world cities rather than fictional ones. The idea of visualising a city in 3D with up-to-date data about that city overlaid is an appealing one. Imagine if you could see census data about your area, education data, health data, crime data, property information and live transport (trains, buses, traffic, etc.); you’d be able to learn and understand so much more about the place you live.

    This is really just the beginning – the possibilities are endless.

    Why 3D?

    A common question we get is “Why 3D?” The short answer, beyond “because it’s a visually interesting way of looking at a city”, is that 3D allows you to do things and analyse data in ways that you can’t on a 2D map. For example, by using 3D you can take height and depth into consideration, so you can better visualise the sheer volume of stuff that lies above and below you in a city – things like pipes and underground tunnels, or bridges, overpasses, tall buildings, the weather, and planes. On a 2D map, looking at all of this would be a confusing mess; in 3D you get to see it exactly how it would look in the real world, so you can easily see how objects within a city relate to each other.

    Core technology

    At the most basic level ViziCities is built using Three.js, a WebGL library that abstracts all the complexity of 3D rendering in the browser. We use a whole range of other technologies too, like Web Workers, each of which serves a specific purpose or solves a specific problem that we’ve encountered along the way.

    Let’s take a look at some of those now.

    Lessons learnt

    Over the past year we’ve come from knowing practically nothing about 3D rendering and geographic data visualisation, to knowing at least enough about each of them to be dangerous. Along the way we’ve hit many roadblocks and have had to spend a significant amount of time working out what’s wrong and coming up with solutions to get around them.

    The process of problem solving is one I absolutely thrive on, but it’s not for everybody and I hope that the lessons I’m about to share will help you avoid these same problems, allowing you to save time and do more important things with your life.

    These lessons are in no particular order.

    Using a modular, decoupled application architecture pays off in the long run

    We originally started out with hacky, prototypal experiments that were dependency heavy and couldn’t easily be pulled apart and used in other experiments. Although it allowed us to learn how everything worked, it was a mess and caused a world of pain when it came to building out a proper application.

    In the end we re-wrote everything based on a simple approach using the Constructor Pattern and the prototype property. Using this allowed us to separate out logic into decoupled modules, making everything a bit more understandable whilst also allowing us to extend and replace functionality without breaking anything else (we use the Underscore _.extend method to extend objects).

    Here’s an example of our use of the Constructor Pattern.
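    The general shape of such a module looks roughly like the sketch below (the module name and members are illustrative, not ViziCities’ actual code):

    /* globals _, VIZI */
    (function() {
      "use strict";
     
      // Constructor: instantiated with `new VIZI.Building(options)`
      VIZI.Building = function(options) {
        this.options = _.extend({}, this.defaults, options);
      };
     
      // Shared defaults and behaviour live on the prototype
      VIZI.Building.prototype.defaults = {
        height: 10
      };
     
      VIZI.Building.prototype.render = function() {
        // ... build and return a mesh for this building
      };
    }());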

    To communicate amongst modules we use the Mediator Pattern. This allows us to keep things as decoupled as possible as we can publish events without having to know about who is subscribing to them.

    Here’s an example of our use of the Mediator Pattern:

    /* globals window, _, VIZI */
    (function() {
      "use strict";
     
      // Apply to other objects using _.extend(newObj, VIZI.Mediator);
      VIZI.Mediator = (function() {
        // Storage for topics that can be broadcast or listened to
        var topics = {};
     
        // Subscribe to a topic, supply a callback to be executed
        // when that topic is broadcast to
        var subscribe = function( topic, fn ){
          if ( !topics[topic] ){ 
            topics[topic] = [];
          }
     
          topics[topic].push( { context: this, callback: fn } );
     
          return this;
        };
     
        // Publish/broadcast an event to the rest of the application
        var publish = function( topic ){
          var args;
     
          if ( !topics[topic] ){
            return false;
          } 
     
          args = Array.prototype.slice.call( arguments, 1 );
          for ( var i = 0, l = topics[topic].length; i < l; i++ ) {
     
            var subscription = topics[topic][i];
            subscription.callback.apply( subscription.context, args );
          }
          return this;
        };
     
        return {
          publish: publish,
          subscribe: subscribe
        };
      }());
    }());

    I’d argue that these two patterns are the most useful aspects of the new ViziCities application architecture – they have allowed us to iterate quickly without fear of breaking everything.

    Using promises instead of wrestling with callbacks

    Early on in the project I was talking to my friend Hannah Wolfe (Ghost’s CTO) about how annoying callbacks are, particularly when you want to load a bunch of stuff in order. It didn’t take Hannah long to point out how stupid I was being (thanks Hannah) and that I should be using promises instead of wrestling with callbacks. At the time I brushed them off as another hipster fad but in the end she was right (as always) and from that point onwards I used promises wherever possible to take back control of application flow.

    For ViziCities we ended up using the Q library, though there are plenty others to choose from (Hannah uses when.js for Ghost).

    The general usage is the same whatever library you choose – you set up promises and you deal with them at a later date. However, the beauty comes when you want to queue up a bunch of tasks and either handle them in order, or do something when they’re all complete. We use this in a variety of places, most noticeably when loading ViziCities for the first time (also allowing us to output a progress bar).
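    A rough sketch of what that looks like with Q (the loader and URLs here are hypothetical):

    // Each loader returns a promise instead of taking a callback
    function loadTile(url) {
      var deferred = Q.defer();
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.onload = function() { deferred.resolve(JSON.parse(xhr.responseText)); };
      xhr.onerror = function() { deferred.reject(new Error("Failed to load " + url)); };
      xhr.send();
      return deferred.promise;
    }
     
    // Kick off tasks in parallel and act once every one has completed
    Q.all([
      loadTile("/tiles/1.json"),
      loadTile("/tiles/2.json")
    ]).then(function(tiles) {
      console.log("All tiles loaded", tiles);
    }).fail(function(err) {
      console.error(err);
    });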

    I won’t lie, promises take a little while to get your head around but once you do you’ll never look back. I promise (sorry, couldn’t resist).

    Using a consistent build process with basic tests

    I’ve never been one to care too much about process, code quality, testing, or even making sure things are Done Right™. I’m a tinkerer and I much prefer learning and seeing results than spending what feels like wasted time on building a solid process. It turns out my tinkering approach doesn’t work too well for a large Web application which requires consistency and robustness. Who knew?

    The first step for code consistency and quality was to enable strict mode and linting. This meant that the most glaring of errors and inconsistencies were flagged up early on. As an aside, due to our use of the Constructor Pattern we wrapped each module in an anonymous function so we could enable strict mode per module without necessarily enabling it globally.

    At this point it was still a faff to use a manual process for creating new builds (generating a single JavaScript file with all the modules and external dependencies) and to serve the examples. The breakthrough was adopting a proper build system using Grunt, thanks mostly to a chat I had with Jack Franklin about my woes at an event last year (he subsequently gave me his cold, which took 8 weeks to get rid of, but it was worth it).

    Grunt allows us to run a simple command in the terminal to do things like automatically test, concatenate and minify files ready for release. We also use it to serve the local build and auto-refresh examples if they’re open in a browser. You can look at our Grunt setup to see how we set everything up.
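    A stripped-down Gruntfile gives the flavour of this (the task configuration here is illustrative, not ViziCities’ actual setup – see the linked Grunt setup for that):

    module.exports = function(grunt) {
      grunt.initConfig({
        jshint: { all: ["src/**/*.js"] },
        concat: { dist: { src: ["src/**/*.js"], dest: "build/vizi.js" } },
        uglify: { dist: { files: { "build/vizi.min.js": ["build/vizi.js"] } } },
        watch: { scripts: { files: ["src/**/*.js"], tasks: ["jshint", "concat"] } }
      });
     
      grunt.loadNpmTasks("grunt-contrib-jshint");
      grunt.loadNpmTasks("grunt-contrib-concat");
      grunt.loadNpmTasks("grunt-contrib-uglify");
      grunt.loadNpmTasks("grunt-contrib-watch");
     
      grunt.registerTask("default", ["jshint", "concat", "uglify"]);
    };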

    For automated testing we use Mocha, Chai, Sinon.js, Sinon-Chai and PhantomJS. Each of which serves a slightly different purpose in the testing process:

    • Mocha is used for the overall testing framework
    • Chai is used as an assertion library that lets you write readable tests
    • Sinon.js is used to fake application logic and track behaviour through the testing process
    • Sinon-Chai adds Chai-style assertions for Sinon.js spies and stubs
    • PhantomJS is used to run client-side tests in a headless browser from the terminal

    We’ve already put together some (admittedly basic) tests and we plan to improve and increase the test coverage before releasing 0.1.0.

    Travis CI is used to make sure we don’t break anything when pushing changes to GitHub. It automatically performs linting and runs our tests via Grunt when changes are pushed, including pull requests from other contributors (a life saver). Plus it lets you have a cool badge to put on your GitHub readme that shows everyone whether the current version is building successfully.

    Together, these solutions have made ViziCities much more reliable than it has ever been. They also mean that we can move rapidly by building automatically, and they allow us to not have to worry so much about accidentally breaking something. The peace of mind is priceless.

    Monitoring performance to measure improvements

    General performance in frames-per-second can be monitored using FPSMeter. It’s useful for debugging parts of the application that are locking up the browser or preventing the rendering loop from running at a fast pace.

    You can also use the Three.js renderer.info property to monitor what you’re rendering and how it changes over time.

    It’s worth keeping an eye on this to make sure objects are not being rendered when they move out of the current viewport. Early on in ViziCities we had a lot of issues with this not happening, and the only way to be sure was to monitor these values.
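    For example, you might log the per-frame stats from the render loop while debugging (assuming renderer, scene and camera are your usual Three.js objects; the exact fields in renderer.info vary a little between Three.js versions):

    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
     
      // Draw call and geometry counts for the frame just rendered
      console.log("calls:", renderer.info.render.calls, "faces:", renderer.info.render.faces);
    }
    animate();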

    Turning geographic coordinates into 2D pixel coordinates using D3.js

    One of the very first problems we encountered was how to turn geographic coordinates (latitude and longitude) into pixel-based coordinates. The math involved to achieve this isn’t simple and it gets even more complicated if you want to consider different geographic projections (trust me, it gets confusing fast).

    Fortunately, the D3.js library has already solved these problems for you, specifically within its geo module. Assuming you’ve included D3.js, you can convert coordinates like so:

    var geoCoords = [-0.01924, 51.50358]; // Central point as [lon, lat]
    var tileSize = 256; // Pixel size of a single map tile
    var zoom = 15; // Zoom level
     
    var projection = d3.geo.mercator()
      .center(geoCoords) // Geographic coordinates of map centre
      .translate([0, 0]) // Pixel coordinates of .center()
      .scale(tileSize << zoom); // Scaling value
     
    // Pixel location of Heathrow Airport in relation to the central point (geoCoords)
    var pixelValue = projection([-0.465567112, 51.4718071791]); // Returns [x, y]

    The scale() value is the hardest part of the process to understand. It basically changes the pixel values that are returned based on how zoomed in you want to be (imagine zooming in on Google Maps). It took me a very long time to understand, so I detailed how scale works in the ViziCities source code for others to learn from (and so I can remember!). Once you nail the scaling, you will be in full control of the geographic-to-pixel conversion process.

    Extruding 2D building outlines into 3D objects on-the-fly

    While 2D building outlines are easy to find, turning them into 3D objects turned out to be not quite as easy as we had imagined. There’s currently no public dataset containing 3D buildings, which is a shame, though it does make it more fun to do it yourself.

    What we ended up using was the THREE.ExtrudeGeometry object, passing in a reference to an array of pixel points (as a THREE.Shape object) representing a 2D building footprint.

    The following is a basic example that would extrude a 2D outline into a 3D object:

    var shape = new THREE.Shape();
    shape.moveTo(0, 0);
    shape.lineTo(10, 0);
    shape.lineTo(10, 10);
    shape.lineTo(0, 10);
    shape.lineTo(0, 0); // Remember to close the shape
     
    var height = 10;
    var extrudeSettings = { amount: height, bevelEnabled: false };
     
    var geom = new THREE.ExtrudeGeometry( shape, extrudeSettings );
    var mesh = new THREE.Mesh(geom);

    Interestingly, it turned out to be quicker to generate the 3D objects on the fly than to pre-render them and load them in. This was mostly because downloading a pre-rendered 3D object takes longer than downloading the 2D coordinate string and generating the geometry at runtime.

    Using Web Workers to dramatically increase performance and prevent browser lockup

    One thing we did notice with the generation of 3D objects was that it locked up the browser, particularly when processing a large number of shapes at the same time (you know, like an entire city). To work around this we delved into the magical world of Web Workers.

    Web Workers allow you to run parts of your application in a thread separate from the browser’s rendering thread, meaning that anything happening in the Web Worker won’t slow down the renderer (i.e. it won’t lock up). They’re incredibly powerful, but they can also be incredibly complicated to get working the way you want.

    We ended up using the Catiline.js Web Worker library to abstract some of the complexity and allow us to focus on using Web Workers to our advantage, rather than fighting against them. The result is a Web Worker processing script that’s passed 2D coordinate arrays and returns generated 3D objects.

    After getting this working we noticed that while the majority of browser lock-ups were eliminated, there were two new lock-ups introduced. Specifically, there was a lock-up when the 2D coordinates were passed into the Web Worker scripts, and another lock-up when the 3D objects were returned back to the main application.

    The solution to this problem came from the inspiring mind of Vladimir Agafonkin (of LeafletJS fame). He helped me understand that to avoid the latter lock-up (passing the 3D objects back to the application) I needed to use transferable objects, namely ArrayBuffer objects. These allow you to transfer ownership of objects created within a Web Worker thread to the main application thread, rather than copying them. We implemented this to great effect, eliminating the second lock-up entirely.
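    In practice the transfer is just an extra argument to postMessage; a minimal sketch (the worker file and the processing step are hypothetical):

    // worker.js: build vertex data from incoming coordinates and hand the
    // resulting buffer back without copying it
    self.onmessage = function(e) {
      var coords = new Float64Array(e.data);
      var vertices = new Float32Array(coords.length * 3); // ... fill with geometry
      self.postMessage(vertices.buffer, [vertices.buffer]); // transfer, don't copy
    };
     
    // Main thread: receive the transferred buffer
    var worker = new Worker("worker.js");
    worker.onmessage = function(e) {
      var vertices = new Float32Array(e.data); // ownership has moved back to us
    };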

    To eliminate the first lock-up (passing 2D coordinates into the Web Worker) we needed to take a different approach. The problem again lies with the copying of the data, though in this case you can’t use transferable objects. The solution instead lies in loading the data from within the Web Worker script using the importScripts method. Unfortunately, I’ve not yet worked out a way to do this with dynamic data sourced from XHR requests. Still, it’s definitely a solution that would work.

    Using simplify.js to reduce the complexity of 2D shapes before rendering

    Something we found early on was that complex 2D shapes caused a lot of strain when rendered as 3D objects en masse. To get around this we use Vladimir Agafonkin’s simplify.js library to reduce the complexity of 2D shapes before rendering.

    It’s a great little tool that allows you to keep the general shape while dramatically reducing the number of points used, thus reducing its complexity and render cost. By using this method we could render many more objects with little to no change in how the objects look.
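    Usage is a single call; a sketch with made-up footprint points:

    // simplify(points, tolerance, highQuality) keeps the overall outline while
    // dropping points that deviate less than the tolerance
    var footprint = [
      { x: 0, y: 0 }, { x: 0.2, y: 0.1 }, { x: 5, y: 0.05 },
      { x: 10, y: 0 }, { x: 10, y: 10 }, { x: 0, y: 10 }
    ];
    var simplified = simplify(footprint, 1.0, false); // larger tolerance = fewer points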

    Getting accurate heights for buildings is really difficult

    One problem we never imagined encountering was getting accurate height information for buildings within a city. While the data does exist, it’s usually unfathomably expensive or requires you to be in education to get discounted access.

    The approach we went for uses accurate height data from OpenStreetMap (if available), falling back to a best guess based on the building type combined with its 2D footprint area. In most cases this gives a far more accurate estimate than simply picking a random height (which is how we originally did it).
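    Sketched as code, the fallback idea looks something like this (height and building:levels are real OpenStreetMap tags, but the numbers and the type-based guess below are invented for illustration):

    function estimateHeight(tags, footprintArea) {
      // Prefer explicit OSM data when mappers have provided it
      if (tags.height) { return parseFloat(tags.height); }
      if (tags["building:levels"]) { return tags["building:levels"] * 3; } // ~3m per storey
     
      // Otherwise guess from the building type and footprint size
      var base = (tags.building === "office") ? 20 : 6;
      return base + Math.sqrt(footprintArea) * 0.1;
    }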

    Restricting camera movement to control performance

    The original dream with ViziCities was to visualise an entire city in one go, flying around, looking down on the city from the clouds like some kind of God. We fast learnt that this came at a price, a performance price and a data-size price, neither of which we were able to afford.

    When we realised this wasn’t going to be possible we looked at how to approach things from a different angle. How can you feel like you’re looking at an entire city without rendering an entire city? The solution was deceptively simple.

    By restricting camera movement to only view a small area at a time (limiting zoom and rotation) you’re able to have much more control over how many objects can possibly be in view at one time. For example, if you prevent someone from being able to tilt a camera to look at the horizon then you’ll never need to render every single object between the camera and the edge of your scene.

    This simple approach means that you can go absolutely anywhere in the world within ViziCities, whether a thriving metropolis or a countryside retreat, and not have to change the way you handle performance. Every situation is predictable and therefore optimisable.
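    With stock Three.js controls this kind of restriction is a matter of clamping a few values; for example, with OrbitControls (ViziCities has its own controls, so treat this as illustrative):

    var controls = new THREE.OrbitControls(camera);
    controls.minDistance = 200;           // can't zoom right down to street level
    controls.maxDistance = 2000;          // can't zoom out to view the whole city
    controls.maxPolarAngle = Math.PI / 3; // can't tilt far enough to see the horizon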

    Tile-based batching of objects to improve loading and rendering performance

    Another approach we took to improve performance was by splitting the entire world into a grid system, exactly like how Google and other map providers do things. This allows you to load data in small chunks that eventually build up to a complete image.

    In the case of ViziCities, we use the tiles to only request JSON data for the geographic area visible to you. This means that you can start outputting 3D objects as each tile loads rather than waiting for everything to load.

    A by-product of this approach is that you get to benefit from frustum culling, which is when objects not within your view are not rendered, dramatically increasing performance.

    Caching of loaded data to save on time and resources when viewing the same location

    Coupled with the tile-based loading is a caching system that means that you don’t request the same data twice, instead pulling the data from a local store. This saves bandwidth but also saves time as it can take a while to download each JSON tile.

    We currently use a dumb local approach that resets the cache on each refresh, but we plan to implement something like localForage to have the cache persist between browser sessions.
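    The current approach boils down to an in-memory map keyed by tile URL; a sketch (getJSON is a hypothetical XHR helper that returns a promise):

    var tileCache = {};
     
    function loadTileCached(url) {
      if (tileCache[url]) {
        return Q(tileCache[url]); // wrap the cached data in a resolved promise
      }
      return getJSON(url).then(function(data) {
        tileCache[url] = data; // lost on refresh; localForage would persist it
        return data;
      });
    }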

    Using the OpenStreetMap Overpass API rather than rolling your own PostGIS database

    Late into the development of ViziCities we realised that it was unfeasible to continue using our own PostGIS database to store and manipulate geographic data. For one, it would require a huge server just to store the entirety of OpenStreetMap; but really, it was just a pain to set up and manage, and an external approach was required.

    The solution came in the shape of the Overpass API, an external JSON and XML endpoint to OpenStreetMap data. Overpass allows you to send a request for specific OpenStreetMap tags within a bounding box (in our case, a map tile):

    http://overpass-api.de/api/interpreter?data=[out:json];((way(51.50874,-0.02197,51.51558,-0.01099)[%22building%22]);(._;node(w);););out;

    And get back a lovely JSON response:

    {
      "version": 0.6,
      "generator": "Overpass API",
      "osm3s": {
        "timestamp_osm_base": "2014-03-02T22:08:02Z",
        "copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
      },
      "elements": [
        {
          "type": "node",
          "id": 262890340,
          "lat": 51.5118466,
          "lon": -0.0205134
        },
        {
          "type": "node",
          "id": 278157418,
          "lat": 51.5143963,
          "lon": -0.0144833
        },
        ...
        {
          "type": "way",
          "id": 50258319,
          "nodes": [
            638736123,
            638736125,
            638736127,
            638736129,
            638736123
          ],
          "tags": {
            "building": "yes",
            "leisure": "sports_centre",
            "name": "The Workhouse"
          }
        },
        {
          "type": "way",
          "id": 50258326,
          "nodes": [
            638736168,
            638736170,
            638736171,
            638736172,
            638736168
          ],
          "tags": {
            "building": "yes",
            "name": "Poplar Playcentre"
          }
        },
        ...
      ]
    }

    The by-product of this was that you get worldwide support out of the box and benefit from minutely OpenStreetMap updates. Seriously, if you edit or add something to OpenStreetMap (please do), it will show up in ViziCities within minutes.

    Limiting the number of concurrent XHR requests

    Something we learnt very recently was that spamming the Overpass API endpoint with a tonne of simultaneous XHR requests wasn’t particularly good for us, or for Overpass. It generally caused delays as Overpass rate-limited us, so data took a long time to make its way back to the browser. The great thing was that, by already using promises to manage the XHR requests, we were halfway to solving the problem.

    The final piece of the puzzle was to use throat.js to limit the number of concurrent XHR requests, letting us take control and load resources without abusing external APIs. It’s beautifully simple and works perfectly. No more loading delays!
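    throat(n) wraps promise-returning functions so that at most n of them run at once; a sketch (getJSON is again a hypothetical XHR helper):

    var limit = throat(4); // at most four requests in flight at a time
     
    var requests = tileUrls.map(function(url) {
      return limit(function() {
        return getJSON(url); // only starts once a slot frees up
      });
    });
     
    Q.all(requests).then(function(tiles) {
      // every tile has loaded without hammering the Overpass API
    });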

    Using ViziCities in your own project

    I hope that these lessons and tips have helped in some way, and I hope that it encourages you to try out ViziCities for yourself. Getting set up is easy and well documented, just head to the ViziCities GitHub repo and you’ll find everything you need.

    Contributing to ViziCities

    Part of the reason why we opened up ViziCities was to encourage other people to help build it and make it even more awesome than Peter and I could ever make it alone. Since launch, over 1,000 people have favourited the project on GitHub, and there are nearly 100 forks. More importantly, we’ve had 9 pull requests from members of the community whom we didn’t previously know and hadn’t asked to help. It’s such an amazing feeling to see people help out like that.

    If we were to pick a favourite contribution so far, it would be adding the ability to load anywhere in the world by putting coordinates in the URL. Such a cool feature and one that has made the project much more usable for everyone.

    We’d love to have more people contribute, whether dealing with issues or playing with the visual styling. Read more about how to contribute and give it a go!

    What’s next?

    It’s been a crazy year, and an even crazier fortnight since we launched the project. We never imagined it would excite people in the way it has; it’s blown us away.

    The next steps are to slowly work through the issues and get ready for the 0.1.0 release, which will still be alpha quality but will be sort of stable. Aside from that, we’ll continue experimenting with exciting new technologies like the Oculus Rift (yes, that’s me with one strapped to my face)…

    Visualising realtime air traffic in 3D…

    And much, much more. Watch this space.