Featured Articles

  1. Songs of Diridum: Pushing the Web Audio API to Its Limits

    When we at Goo Technologies heard that the Web Audio API would be supported in an upcoming version of Mozilla Firefox, we immediately started brainstorming about what we could build with that.

    We started discussing the project with the game developers behind “Legend of Diridum” (see below) and came up with the idea of a small marketplace with a small jazz band on a stage. The feeling we wanted to capture was that of a marketplace coming to life. The band is playing a song to warm up and the crowd has not gathered around yet. The evening is warm and the party is about to start.

    We call the resulting demo “Songs of Diridum”.

    What the Web Audio API can do for you

    We will take a brief look at the web audio system from three perspectives. These are game design, audio engineering and programming.

    From a game designer’s perspective, we can use the functionality of the Web Audio API to tune the soundscape of our game. We can run a whole lot of separate sounds simultaneously while also adjusting their character to fit an environment or a game mechanic. You can have muffled sounds coming through a closed door and open the filters for these sounds, gradually unmuffling them as the door opens. In real time. We can add the reflected sounds of the environment to the footsteps of our character as we walk from a sidewalk into a church: the ambient sounds of the street will echo around the church together with our footsteps. We can attach the sound of a roaring fire to our magician’s fireball, hurl it away and hear the fireball move towards its target. We can hear the siren of a police car approaching and hear it pass by, thanks to the pitch shift known as the Doppler effect. And we know we can use these features without having to manage the production of an audio engine. It’s already there, and it works.
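
    To make the door example concrete, here is a minimal sketch of the idea, assuming an existing AudioContext (context) and a connected source node (source); the function and node names are ours, not from the demo:

    var filter = context.createBiquadFilter();
    filter.type = 'lowpass';
    filter.frequency.value = 400; // door closed: only low frequencies pass

    source.connect(filter);
    filter.connect(context.destination);

    function setDoorOpening(amount) {
        // amount: 0 (closed) .. 1 (open); ramping avoids zipper noise
        filter.frequency.setValueAtTime(filter.frequency.value, context.currentTime);
        filter.frequency.linearRampToValueAtTime(400 + amount * 15000, context.currentTime + 0.1);
    }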

    From an audio engineering perspective, we view the Web Audio API as a big patch bay with a slew of outboard gear, tape decks and mixers. On a low level, we feel reasonably comfortable with the fundamental aspects of the system. We can change the volume of a sound while it is playing without risking the digital distortion that comes from the volume level jumping from one sample to the next: the system performs the interpolation needed for this type of adjustment. We can also build the types of effects we want and hook them up however we want. As long as we keep our system reasonably small, we can make a nice studio with the Web Audio API.

    From a programmer’s perspective, we can write the code needed for our project with ease. If we run into a problem, we will usually find a good solution on the web. We don’t have to spend our time learning how to work with some poorly documented proprietary audio engine. The problems we will work with the most are probably related to how the code is structured: figuring out how to handle the loading of the sounds, and which sounds to load when; how to provide these sounds to the game designer through some suitable data structure or other design pipeline. We will also work with the team to figure out how to budget the sound system. How much data can we use? How many sounds can we play at the same time? How many effects can we use on the target platform? The hardest problem, and the biggest technical risk, is likely related to handling the diversity of hardware and browsers running on the web.

    About Legend of Diridum

    This sound demo called “Songs of Diridum” is actually a special demo based on graphics and setting from the upcoming game “LOD: Legend of Diridum”. The LOD team is led by the game design veteran Michael Stenmark.

    LOD is an easy-to-learn, user-friendly fantasy role-playing game set on top of a so-called sandbox world. It is a mix of Japanese fantasy and role-playing games, drawing inspiration from Grandia, Final Fantasy and Zelda, as well as games like Animal Crossing and Minecraft.

    The game is set in a huge fantasy world, in the aftermath of a terrible magic war. The world is haunted by the monsters, ghosts and undead that were part of the warlocks’ armies, and the player starts the game as the Empire’s official ghost hunter, tasked to cleanse the lands of evil and keep the people of Diridum safe. LOD is built on the Goo Engine and can be played in almost any web browser without the need to download anything.

    About the music

    The song was important to set the mood and capture the warm feeling of a hot summer night turning into a party. Adam Hagstrand, the composer, nailed it right away. We love how laid back and jazzy he made it: just the kind of tune a band would warm up with before the crowd arrives.

    Quickly building a 3D game with Goo Engine

    We love the web, and we love HTML5. HTML5 runs in the browser on multiple devices and does not need any special client software to be downloaded and installed. This allows for games to be published on nearly every conceivable web site, and since it runs directly in the browser, it opens up unprecedented viral opportunities and social media integration.

    We wanted to build Songs of Diridum as an HTML5 browser game, but how to do that in 3D? The answer was WebGL. WebGL is a new standard in HTML5 that gives games access to hardware acceleration, just like native games. The introduction of WebGL in HTML5 is a massive leap in what the browser can deliver, and it allows web games to be built with previously unseen graphics quality. WebGL-powered HTML5 does not require much bandwidth during gameplay: since the game assets are downloaded (pre-cached) before and during gameplay, even modest-speed internet connections suffice.

    But building a WebGL game from scratch is a massive undertaking. The Goo Platform from Goo Technologies makes it much easier to build WebGL games and applications. In November, Goo Create was released, making it even more accessible and easy to use.

    Goo is an HTML5- and WebGL-based graphics development platform capable of powering the next generation of web games and apps. From the ground up, it’s been built for amazing graphics smoothness and performance while at the same time making things easy for graphics creators. Since it’s HTML5, it enables creators to build and distribute advanced interactive graphics on the web without special browser plugins or software downloads. With Goo you have the power to publish hardware-accelerated games and apps on desktops, laptops, smart TVs, tablets and mobile devices. It gives instant access to smooth, rich graphics and previously unimagined browser gameplay.

    UPDATE: Goo Technologies has just launched their interactive 3D editor, Goo Create, which radically simplifies the process of creating interactive WebGL graphics.

    Building Songs of Diridum

    We built this demo project with a rather small team working for a relatively short time. In total, about seven people have been involved in the production, most of them making sporadic updates to our data and code. Roughly speaking, we did not follow any conventional development process, but rather tried to get as good a result as we could without going into any bigger-scale production.

    The programming of the demo has two distinct sides: one for building the world and one for wiring up the sound system. Since the user interface is primarily used for controlling the state of the sound system, we let the sound system drive the user interface. It’s a simple approach, but we also have a relatively simple problem to solve. Building a small 3D world like this one is mostly a matter of loading model data at various positions in 3D space. All the low-level programming needed to render the scene with the proper colors and shapes is handled by the Goo Engine, so we have not had to write any code for those parts.

    We defined a simple data format for adding model data to the scene. We also included the ability to add sounds to the world, plus some slightly more complex systems such as the animated models and the bubbly water sound effect.

    The little mixer panel that you can play around with to change the mix of the jazz band is dynamically generated by the sound system.

    Since we expected this app to be used on touch-screen devices, we decided to use only clickable buttons for the interface. We would not have had enough time to test the usability of any other type of controls when aiming at such a diffuse collection of mobile devices.

    Using Web Audio in a 3D world

    To build the soundscape of a 3D world, we have access to spatialization of sound sources and of the ears of the player, the listener. The spatial aspects of sounds and listeners boil down to position, direction and velocity. Each sound source can also be made to emit in a directional cone; in short, this emulates the difference between the front and back of a loudspeaker. A piano does not really need any directionality, as it sounds quite similar in all directions. A megaphone, on the other hand, is comparably directional and should be louder at the front than at the back.

    if (is3dSource) {
        // 3D sound source
        this.panNode = this.context.createPanner();
        this.gainNode.connect(this.panNode);
        this.lastPos = [0,0,0];
        this.panNode.rolloffFactor = 0.5;
    } else {
        // Stereo sound source "hack"
        this.panNode = mixNodeFactory.buildStereoChannelSplitter(this.gainNode, context);
        this.panNode.setPosition(0, this.context.currentTime);
    }
    

    The position of the sound is used to determine the panning (which speaker the sound is louder in) and how loud the sound should be.

    The velocity of the sound and of the listener, together with their positions, provide the information needed to doppler-shift all sound sources in the world accordingly. For other environmental effects, such as muffling sounds behind a door or changing the sound of the room with reverbs, we’ll have to write some code and configure processing nodes to get the results we want.
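
    As a rough sketch of the spatial part, assuming the panner node from the snippet above is available as panNode along with the shared context, positioning looked like this at the time of writing (setPosition/setVelocity have since been deprecated in favor of AudioParam-based positions):

    var listener = context.listener;
    listener.setPosition(0, 1.8, 0);   // the player's ears

    panNode.setPosition(10, 1, -4);    // the siren, off to the right
    panNode.setVelocity(-20, 0, 0);    // moving left at 20 units per second
    // with positions and velocities set, the API pans, attenuates and
    // doppler-shifts the source for us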

    For sounds added to the user interface and similar direct effects, we can hook the sounds up without going through the spatialization, which makes them a bit simpler. You can still process these sounds and be creative if you like; perhaps pan the sound source based on where the mouse pointer clicked.

    Initializing Web Audio

    Setting up Web Audio is quite simple. The tricky parts are figuring out how much sound you need to preload and how much you feel comfortable loading later. You also need to take into account that loading a sound involves two asynchronous and potentially slow operations: the download, and the decoding from some small compressed format such as Ogg or MP3 into a raw AudioBuffer. When developing against a local machine you’ll find that the decoding is a lot slower than the download, and as with download speeds in general, we cannot know in advance how much time this will take for any given user.
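
    A minimal loading sketch, assuming a shared AudioContext (context) and a hypothetical cache object (sounds); the URL is made up for illustration:

    var request = new XMLHttpRequest();
    request.open('GET', 'assets/jazz-band.ogg', true);
    request.responseType = 'arraybuffer'; // we want raw bytes, not text
    request.onload = function () {
        // asynchronous decode: compressed bytes -> AudioBuffer
        context.decodeAudioData(request.response, function (buffer) {
            sounds['jazz-band'] = buffer; // ready to play
        }, function () {
            console.log('decoding failed');
        });
    };
    request.send();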

    Playing sound streams

    Once you have a sound decompressed and ready, it can be used to create a sound source. A sound source is a relatively low-level object which streams its sound data at some speed to its selected target node. For the simplest system, this target node is the speaker output. Even with this primitive setup you can already manipulate the playback rate of the sound, which changes both its pitch and duration. There is a nice feature in the Web Audio API which lets you adjust how a change like this is interpolated, to fit your needs.
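
    For instance, a decoded buffer can be played and its rate ramped along these lines (a sketch assuming context and a decoded buffer):

    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination); // simplest routing: straight to the speakers
    source.start(0);

    // halve the playback rate over two seconds; the API interpolates for us
    source.playbackRate.setValueAtTime(1.0, context.currentTime);
    source.playbackRate.linearRampToValueAtTime(0.5, context.currentTime + 2);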

    Adding sound effects: reverb, delay, and distortion

    To add an effect to our simple system, you put a processor node between the source and the speakers. At this point, we audio engineers want to split the source into a “dry” and a “wet” component. The dry component is the unprocessed sound and the wet is the processed one. You then send both the wet and the dry signal to the speakers, and adjust the mix between them by adding a gain node on each of these tracks. Gain is an engineer’s way of saying “volume”. You can keep going like this and add nodes between the source and the speakers as you please. Sometimes you’ll want effects in parallel and sometimes serially. When coding your system, it’s probably a good idea to make it easy to change how this wiring is hooked up for any given node.
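
    Wired up, a dry/wet split might look like this (a sketch assuming context, a source node, and a decoded impulse response impulseBuffer for the reverb):

    var dryGain = context.createGain();
    var wetGain = context.createGain();
    var reverb = context.createConvolver();
    reverb.buffer = impulseBuffer;

    source.connect(dryGain);              // dry path: source -> gain -> speakers
    dryGain.connect(context.destination);

    source.connect(reverb);               // wet path: source -> reverb -> gain -> speakers
    reverb.connect(wetGain);
    wetGain.connect(context.destination);

    dryGain.gain.value = 0.7;             // mostly dry...
    wetGain.gain.value = 0.3;             // ...with a bit of reverb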

    Conclusion

    We’re quite happy with how “Songs of Diridum” turned out, and we’re impressed with the audio and 3D performance available in today’s web browsers. Hopefully the mobile devices will catch up soon performance-wise, and HTML5 will become the most versatile cross-platform game environment available.
    Now go play “Songs of Diridum”!

  2. Introducing TogetherJS

    What is TogetherJS?

    We’d like to introduce TogetherJS, a real-time collaboration tool out of Mozilla Labs.

    TogetherJS is a service you add to an existing website to add real-time collaboration features. Using the tool, two or more visitors on a website or web application can see each other’s mouse/cursor positions and clicks, track each other’s browsing, edit forms together, watch videos together, and chat via text and audio using WebRTC.

    Some of the features TogetherJS includes:

    • See the other person’s cursor and clicks
    • See scroll position
    • Watch the pages a person visits on a site
    • Text chat
    • Audio chat using WebRTC
    • Form field synchronization (text fields, checkboxes, etc)
    • Play/pause/track videos in sync
    • Continue sessions across multiple pages on a site

    How to integrate

    Many of TogetherJS’s features require no modification of your site. TogetherJS looks at the DOM and determines much of what it should do that way – it detects the form fields, detects some editors like CodeMirror and Ace, and injects its toolbar into your page.

    All that’s required to try TogetherJS out is to add this to your page:

    <script src="https://togetherjs.com/togetherjs.js"></script>

    And then create a button for your users to start TogetherJS:

    <button id="collaborate" type="button">Collaborate</button>
    <script>
    document.getElementById("collaborate")
      .addEventListener("click", TogetherJS, false);
    </script>

    If you want to see some of what TogetherJS does, jsFiddle has enabled TogetherJS:

    [Screenshot: jsFiddle with the Collaborate button highlighted]

    Just click Collaborate and it will start TogetherJS. You can also use TogetherJS in your own fiddles, as we’ll show below.

    Extending for your app

    TogetherJS can figure out some things by looking at the DOM, but it can’t synchronize your JavaScript application. For instance, if you have a list of items in your application that is updated through JavaScript, that list won’t automatically be in sync for both users. Sometimes people expect (or at least hope) that it will automatically update, but even if we did synchronize the DOM across both pages, we couldn’t synchronize your underlying JavaScript objects. Unlike products such as Firebase or the Google Drive Realtime API, TogetherJS does not give you realtime persistence: your persistence and the functionality of your site are left up to you; we just synchronize sessions in the browser itself.

    So if you have a rich JavaScript application you will have to write some extra code to keep sessions in sync. We do try to make it easier, though!

    To give an example we’d like to use a simple drawing application. We’ve published the complete example as a fiddle which you can fork and play with yourself.

    A Very Small Drawing Application

    We start with a very simple drawing program. We have a simple canvas:

    <canvas id="sketch" width="400" height="400"
            style="border: 1px solid #000">
    </canvas>

    And then some setup:

    // get the canvas element and its context
    var canvas = document.querySelector('#sketch');
    var context = canvas.getContext('2d');
     
    // brush settings
    context.lineWidth = 2;
    context.lineJoin = 'round';
    context.lineCap = 'round';
    context.strokeStyle = '#000';

    We’ll use mousedown and mouseup events on the canvas to register our move() handler for the mousemove event:

    var lastMouse = {
      x: 0,
      y: 0
    };
     
    // attach the mousedown, mousemove, mouseup event listeners.
    canvas.addEventListener('mousedown', function (e) {
        lastMouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        canvas.addEventListener('mousemove', move, false);
    }, false);
     
    canvas.addEventListener('mouseup', function () {
        canvas.removeEventListener('mousemove', move, false);
    }, false);

    And then the move() function will figure out the line that needs to be drawn:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        lastMouse = mouse;
    }

    And lastly a function to draw lines:

    function draw(start, end) {
        context.beginPath();
        context.moveTo(start.x, start.y);
        context.lineTo(end.x, end.y);
        context.closePath();
        context.stroke();
    }

    This is enough code to give us a very simple drawing application. At this point if you enable TogetherJS on this application you will see the other person move around and see their mouse cursor and clicks, but you won’t see drawing. Let’s fix that!

    Adding TogetherJS

    TogetherJS has a “hub” that echoes messages between everyone in the session. It doesn’t interpret messages, and everyone’s messages travel back and forth, including messages from people who might be on another page. TogetherJS also lets the application send its own messages, like:

    TogetherJS.send({
      type: "message-type",
      // ...any other attributes you want to send...
    })

    to send a message (every message must have a type), and to listen:

    TogetherJS.hub.on("message-type", function (msg) {
      if (! msg.sameUrl) {
        // Usually you'll test for this to discard messages that came
        // from a user at a different page
        return;
      }
    });

    The message types are namespaced so that your application messages won’t accidentally overlap with TogetherJS’s own messages.

    To synchronize drawing we’d want to watch for any lines being drawn and send those to the other peers:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        if (TogetherJS.running) {
            TogetherJS.send({type: "draw", start: lastMouse, end: mouse});
        }
        lastMouse = mouse;
    }

    Before we send we check that TogetherJS is actually running (TogetherJS.running). The message we send should be self-explanatory.

    Next we have to listen for the messages:

    TogetherJS.hub.on("draw", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        draw(msg.start, msg.end);
    });

    We don’t have to worry about whether TogetherJS is running when we register this listener; it will only be called when TogetherJS is running.

    This is enough to make our drawing live and collaborative. But there’s one thing we’re missing: if I start drawing an image, and you join me, you’ll only see the new lines I draw, you won’t see the image I’ve already drawn.

    To handle this we’ll listen for the togetherjs.hello message, which is the message each client sends when it first arrives at a new page. When we see that message we’ll send the other person an image of our canvas:

    TogetherJS.hub.on("togetherjs.hello", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = canvas.toDataURL("image/png");
        TogetherJS.send({
            type: "init",
            image: image
        });
    });

    Now we just have to listen for this new init message:

    TogetherJS.hub.on("init", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = new Image();
        image.src = msg.image;
        context.drawImage(image, 0, 0);
    });

    With just a few lines of code, TogetherJS let us make a live drawing application. Of course we had to write some of the code, but here are some of the things TogetherJS handles for us:

    • Gives users a URL to share with another user to start the session
    • Establishes a WebSocket connection to our hub server, which echoes messages back and forth between clients
    • Lets users set their name and avatar, and see who else is in the session
    • Keeps track of who is available, who has left, and who is idle
    • Provides simple but necessary features like text chat
    • Handles session initialization and tracking

    Some of the things we didn’t do in this example:

    • We used a fixed-size canvas so that we didn’t have to deal with two clients at two different resolutions. TogetherJS generally handles different kinds of clients, but resolution-independent positioning is up to the application (it even works with responsive designs). One approach to fix this might be to ensure a fixed aspect ratio, and then use percentages of the height/width for all the drawing positions.
    • We don’t have any fun drawing tools! Probably you wouldn’t want to synchronize the tools themselves – if I’m drawing with a red brush, there’s no reason you can’t be drawing with a green brush at the same time.
    • But something like clearing the canvas should be synchronized; a sketch of that follows this list.
    • We don’t save or load any drawings. Once the drawing application has save and load you may have to think more about what you want to synchronize. If I have created a picture, saved it, and then return to the site to join your session, will your image overwrite mine? Putting each image at a unique URL will make it clearer whose image everyone is intending to edit.
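
    For example, a synchronized “clear” could reuse the same send/listen pattern as the draw message (a hypothetical sketch, not part of the fiddle):

    function clearCanvas() {
        context.clearRect(0, 0, canvas.width, canvas.height);
        if (TogetherJS.running) {
            TogetherJS.send({type: "clear"});
        }
    }

    TogetherJS.hub.on("clear", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        context.clearRect(0, 0, canvas.width, canvas.height);
    });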

    Want To Look At More?

    • Curious about the architecture of TogetherJS? Read the technology overview.
    • Try TogetherJS out on jsFiddle
    • Find us via the “Get Live Help” button in the documentation, which will ask to start a TogetherJS session with one of us.
    • Find us on IRC in #togetherjs on irc.mozilla.org.
    • Find the code on GitHub, and please open an issue if you see a bug or have a feature request. Don’t be shy; we are interested in many kinds of feedback via issues: ideas, potential use cases (and the challenges that come with them), questions that don’t seem to be answered in our documentation (each of which also implies a bug in the documentation), and potentially synergistic applications.
    • Follow us on Twitter: @togetherjs.

    What kind of sites would you like to see TogetherJS on? We’d love to hear in the comments.

  3. Introducing the Firefox OS App Manager

    The Firefox OS App Manager is a new developer tool available in Firefox 26 that greatly improves the process of building and debugging Firefox OS apps, either in the Simulator or on a connected device. Based on the Firefox OS Simulator add-on, it bridges the gap between the existing Firefox Developer Tools and the Firefox OS Simulator, allowing developers to fully debug and deploy their web apps to Firefox OS with ease.

    Additional features made available in Firefox 26 are discussed in this post.

    App Manager In Action

    The App Manager replaces the current Simulator Dashboard and provides an integrated debug and deployment environment for your Firefox OS apps, by leveraging the existing Firefox Developer Tools. You can install hosted or packaged apps and debug them in the Simulator or with a connected device. The App Manager also provides additional information to the developer including the current Firefox OS version of a connected device, the ability to take screenshots, a list of all currently installed apps and a list of all the APIs and what privilege level is required to use each. Here is a screencast of the App Manager showing off some of the features available for Firefox OS Development.

    In addition to debugging your own apps, the App Manager also provides the ability to update, start, stop and debug system-level apps. Debugging an app using the Developer Tools is similar to debugging any other web app, and changes made in the Tools are automatically reflected in real time in the app, either in the Simulator or on the connected device. You can use the Console to see warnings and errors within the app, the Inspector to view and modify the currently loaded HTML and CSS, or debug your JavaScript using the Debugger.

    For more information about using the Developer Tools be sure to check out the Developer Tools series on this blog and for the most up to date information take a look at the Developer Tools section of MDN.

    Getting Started with the App Manager

    To get started using the App Manager, take a look at the MDN documentation on Using the App Manager. Do keep in mind that to see what is shown above you need:

    • Firefox 26 or later
    • Firefox OS 1.2 or later
    • at least the 1.2 version of the Firefox OS Simulator
    • either the ADB SDK or the ADB Helper Add-on

    Details for these are described in the above MDN link.

    Mozilla is very interested in your feedback, as that is the best way to make useful tools. Please reach out to us through Bugzilla or in the comments, and let us know what you think about the new App Manager.

  4. Firefox Developer Tools and Firebug

    If you haven’t tried the Firefox Developer Tools in the last 6 months, you owe it to yourself to take another look. Grab the latest Aurora browser, and start up the tools from the Web Developer menu (a submenu of Tools on some platforms).

    The tools have improved a lot lately: black-boxing lets you treat sources as system libraries that won’t distract your debugging flow. Source maps let you debug source generated by transpilers or minimizers. The inspector has paint flashing, a new font panel, and a greatly improved style inspector with tab completion and pseudo-elements. The network monitor helps you debug your network activity. The list goes on, and you can read more about recent developments in our series of Aurora Posts.

    After getting to know the tools, start the App Manager. Install the Firefox OS Simulator to see how your app will behave on a device. If you have a Firefox OS device running the latest 1.2 nightlies, you can connect the tools directly to the phone.

    Why The Built-in Tools?

    The Web owes a lot to Firebug. For a long time, Firebug was the best game in town. It introduced the world to visual highlighting of the DOM, in-place editing of styles, and the console API.

    Before the release of Firefox 4, we decided that Firefox needed a set of high-quality built-in tools. Baking them into the browser let us take advantage of the existing Mozilla community and infrastructure, and building in mozilla-central makes a huge difference when working with Gecko and SpiderMonkey developers. We had ambitious platform changes planned: the JSD API that Firebug uses to debug JavaScript had aged badly, and we wanted to co-evolve the tools alongside a new SpiderMonkey Debugger API.

    We thought long and hard about including Firebug wholesale and considered several approaches to integrating it. An early prototype of the Inspector even included a significant portion of Firebug. Ultimately, integration proved to be too challenging and would have required rewrites that would have been equivalent to starting over.

    How is Firebug Doing?

    Firebug isn’t standing still. The Firebug Working Group continues to improve it, as you can see in their latest 1.12 release. Firebug is working hard to move from JSD to the new Debugger API, to reap the performance and stability benefits we added for the Firefox Developer Tools.

    After that? Jan Odvarko, the Firebug project leader, had this to say:

    Firebug has always been maintained rather as an independent project outside of existing processes and Firefox environment while DevTools is a Mozilla in-house project using standard procedures. Note that the Firebug goal has historically been to complement Firefox features and not compete with them (Firebug is an extension after all) and we want to keep this direction by making Firebug a unique tool.

    Everyone wants to figure out the best way for Firebug’s community of users, developers, and extension authors to shape and complement the Firefox Developer Tools. The Firebug team is actively discussing their strategy here, but hasn’t decided how they want to accomplish that.

    Follow the Firebug blog and @firebugnews account to get involved.

    What’s Next for Firefox Developer Tools?

    We have more exciting stuff coming down the pipe. Some of this will be new feature work, including great performance analysis and WebGL tools. Much of it will be responding to feedback, especially from developers giving the tools a first try.

    We also want to find out what you can add to the tools. Recently the Ember.js Chrome Extension was ported to Firefox Developer Tools as a Sunday hack, but we know there are more ideas out there. Like most Firefox code you can usually find a way to do what you want, but we’re also working to define and publish a Developer Tools API. We want to help developers build high quality, performant developer tools extensions. We’d love to hear from developers writing extensions for Firebug or Chrome Devtools about how to best go about that.

    Otherwise, keep following the Hacks blog to learn more about how the Firefox Developer Tools are evolving. Join in at the dev-developer-tools mailing list, the @FirefoxDevTools Twitter account, or #devtools on irc.mozilla.org.

  5. Reintroducing the Firefox Developer Tools, part 1: the Web Console and the JavaScript Debugger

    This is part one of a five-part series focusing on the built-in Developer Tools in Firefox, their features, and where we are now with them. The intention is to show you all the possibilities available, the progress so far, and what we are aiming for.

    Firefox 4 saw the launch of the Web Console, the first of the new developer tools built into Firefox. Since then we’ve been adding more capabilities to the developer tools, which now perform a wide range of functions and can be used to debug and analyze web applications on desktop Firefox, Firefox OS, and Firefox for Android.

    This is the first in a series of posts in which we’ll look at where the developer tools have gotten to since Firefox 4. We’ll present each tool in a short screencast, then wrap things up with a couple of screencasts illustrating specific workflow patterns that should help you get the most out of the developer tools. These will include scenarios such as mobile development, and altering and debugging CSS-based applications.

    In this first post we present the latest Web Console and JavaScript Debugger.

    Web Console

    The Web Console is primarily used to display information associated with the currently loaded web page: HTML, CSS, JavaScript, and security warnings and errors. Network requests are also displayed, and the console indicates whether they succeeded or failed. When warnings and errors are detected, the Web Console offers a link to the line of code that caused the problem. The Web Console is often the first stop in debugging a web application that is not performing correctly.

    The Web Console also lets you execute JavaScript within the context of the page. This means you can inspect objects defined by the page, execute functions within the scope of the page, and access specific elements using CSS selectors. The following screencast presents an overview of the Web Console’s features.
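
    For a quick taste, here are the kinds of expressions you can evaluate at the console prompt (the $ and $$ helpers shown here are console conveniences, assumed to behave as in current Firefox):

    document.title            // any expression runs in the page's scope
    $("#main")                // first element matching a CSS selector
    $$("nav a").length        // all elements matching a selector
    inspect(document.body)    // open an object in the inspector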

    Take a look at the MDN Web Console documentation for more information.

    JavaScript Debugger

    The JavaScript Debugger is used for debugging and refining JavaScript that your Web Application is currently using. The Debugger can be used to debug code running in Firefox OS and Firefox for Android, as well as Firefox Desktop. It’s a full-featured debugger providing watch expressions, scoped variables, breakpoints, conditional expressions, step-through, step-over and run to end functionality. In addition you can change the values of variables at run time while the debugger has paused your application.

    The following screencast illustrates some of the features of the JavaScript Debugger.

    For more information on the JavaScript Debugger, see the MDN Debugger documentation.

    Learn More

    These screencasts give a quick introduction to the main features of these tools. For the full details on all of the developer tools, check out the full MDN Tools documentation.

    Coming Up

    In the next post in this series we will be delving into the Style Editor and the Scratchpad. Please give us your feedback on what features you would like to see explained in more detail within the comments.

  6. Firefox 24 for Android gets WebRTC support by default

    WebRTC is now on Firefox for Android as well as Firefox Desktop! Firefox 24 for Android now supports mozGetUserMedia, mozRTCPeerConnection, and DataChannels by default. mozGetUserMedia has been in desktop releases since Firefox 20, and mozRTCPeerConnection and DataChannels since Firefox 22, and we’re excited that Android now joins the desktop releases in supporting these cool new features!

    What you can do

    With WebRTC enabled, developers can:

    • Capture camera or microphone streams directly from Firefox Android using only JavaScript (a feature we know developers have been wanting for a while!),
    • Make browser to browser calls (audio and/or video) which you can test with sites like appspot.apprtc.com, and
    • Share data directly (no server in the middle) to enable peer-to-peer apps, e.g. text chat, gaming, or image sharing especially during calls (a minimal sketch follows this list)
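
    Here is a minimal sketch of that third item, assuming pc is an established mozRTCPeerConnection like the ones created in the longer snippet further down:

    var channel = pc.createDataChannel("chat");
    channel.onopen = function () {
        channel.send("hello, peer!"); // goes straight to the other browser
    };
    channel.onmessage = function (event) {
        log("got: " + event.data); // `log` as used elsewhere in this post
    };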

    We’re eager to see the ideas developers come up with!

    For early adopters and feedback

    Our support is still largely intended for developers and for early adopters at this stage to give us feedback. The working group specs are not complete, and we still have more features to implement and quality improvements to make. We are also primarily focused now on making 1:1 (person-to-person) calling solid — in contrast to multi-person calling, which we’ll focus on later. We welcome your testing and experimentation. Please give us feedback, file bug reports and start building new applications based on these new abilities.

    If you’re not sure where to start, please start by reading some of the WebRTC articles on Hacks that have already been published. In particular, please check out WebRTC and the Early API, The Making of Face to GIF, and PeerSquared as well as An AR Game (which won our getUserMedia Dev Derby) and WebRTC Experiments & Demos.

    An example of simple video frame capture (which will capture new images at approximately 15fps):

    navigator.mozGetUserMedia({video: true, audio: false}, function (stream) {
        // attach the camera stream to a <video> element, then sample it
        video.src = URL.createObjectURL(stream);
        video.play();
        setInterval(function () {
            context.drawImage(video, 0, 0, width, height);
            frames.push(context.getImageData(0, 0, width, height));
        }, 67);
    }, function (err) {
        console.log("getUserMedia failed: " + err);
    });

    Snippet of code taken from “Multi-person video chat” on nightly-gupshup (you can try it in the WebRTC Test Landing Page — full code is on GitHub)

    function acceptCall(offer) {
        log("Incoming call with offer " + offer);

        navigator.mozGetUserMedia({video: true, audio: true}, function(stream) {
            document.getElementById("localvideo").mozSrcObject = stream;
            document.getElementById("localvideo").play();
            document.getElementById("localvideo").muted = true;

            var pc = new mozRTCPeerConnection();
            pc.addStream(stream);

            pc.onaddstream = function(obj) {
                document.getElementById("remotevideo").mozSrcObject = obj.stream;
                document.getElementById("remotevideo").play();
            };

            pc.setRemoteDescription(new mozRTCSessionDescription(JSON.parse(offer.offer)), function() {
                log("setRemoteDescription, creating answer");
                pc.createAnswer(function(answer) {
                    pc.setLocalDescription(answer, function() {
                        // Send answer to remote end.
                        log("created Answer and setLocalDescription " + JSON.stringify(answer));
                        peerc = pc;
                        jQuery.post(
                            "answer", {
                                to: offer.from,
                                from: offer.to,
                                answer: JSON.stringify(answer)
                            },
                            function() { console.log("Answer sent!"); }
                        ).error(error);
                    }, error);
                }, error);
            }, error);
        }, error);
    }

    function initiateCall(user) {
        navigator.mozGetUserMedia({video: true, audio: true}, function(stream) {
            document.getElementById("localvideo").mozSrcObject = stream;
            document.getElementById("localvideo").play();
            document.getElementById("localvideo").muted = true;

            var pc = new mozRTCPeerConnection();
            pc.addStream(stream);

            pc.onaddstream = function(obj) {
                log("Got onaddstream of type " + obj.type);
                document.getElementById("remotevideo").mozSrcObject = obj.stream;
                document.getElementById("remotevideo").play();
            };

            pc.createOffer(function(offer) {
                log("Created offer" + JSON.stringify(offer));
                pc.setLocalDescription(offer, function() {
                    // Send offer to remote end.
                    log("setLocalDescription, sending to remote");
                    peerc = pc;
                    jQuery.post(
                        "offer", {
                            to: user,
                            from: document.getElementById("user").innerHTML,
                            offer: JSON.stringify(offer)
                        },
                        function() { console.log("Offer sent!"); }
                    ).error(error);
                }, error);
            }, error);
        }, error);
    }

    Any code that runs on Desktop should run on Android. (Ah, the beauty of HTML5!) However, you may want to optimize for Android knowing that it could now be used on a smaller screen device and even rotated.

    This is still a hard-hat area, especially for mobile. We’ve tested our Android support of 1:1 calling with a number of major WebRTC sites, including talky.io, apprtc.appspot.com, and codeshare.io.

    Known issues

    • Echo cancellation needs improvement; for calls we suggest a headset (Bug 916331)
    • Occasionally there are audio/video sync issues or excessive audio delay. We already have a fix in Firefox 25 that will improve delay (Bug 884365).
    • On some devices there are intermittent video-capture crashes; we’re actively investigating (Bug 902431).
    • Lower-end devices or devices with poor connectivity may have problems decoding or sending higher-resolution video at good frame rates.

    Please help us bring real-time communications to the web: build your apps, give us your feedback, report bugs, and help us test and develop. With your help, your ideas, and your enthusiasm, we will rock the web to a whole new level.

  7. CSS Length Explained

    When styling a web site with CSS you might have realised that an inch on a screen is not an actual inch, and a pixel is not necessarily an actual pixel. Have you ever figured out how to represent the speed of light in CSS pixels? In this post, we will explore the definition of CSS length units starting by understanding some of the physical units with the same name, in the style of C.G.P. Grey[1].

    The industrial inch (in)

    People who live in places where the inch is a common measure are already familiar with the physical unit. For the rest of us living in places using the metric system: since 1933, the “industrial inch” has been defined as the mathematical equivalent of 2.54 centimetres, or 0.0254 metres.

    The device pixel

    Computer screens display things in pixels. The single physical “light blob” on the display, capable of displaying the full color independent of its neighbours, is called a pixel (picture element). In this post, we refer to the physical pixel on the screen as a “device pixel” (not to be confused with a CSS pixel, which will be explained later on).

    Display pixel density, dots per inch (DPI), or pixels per inch (ppi)

    The physical dimension of a device pixel on a specific device can be derived from the display pixel density given by the device manufacturer, usually in dots per inch (DPI), or pixels per inch (PPI). Both units are essentially talking about the same thing when they refer to a screen display, where DPI is the commonly-used-but-incorrect unit and PPI is the more-accurate-but-no-one-cares one. The physical dimension of a device pixel is simply the inverse of its DPI.

    The MacBook Air (2011) I am currently using comes with a 125 DPI display, so

    (width or height of one device pixel) = 1/125 inch = 0.008 inch = 0.02032 cm

    Obviously this is a number too small to be printed on the specification, so the DPI stays.

    The CSS pixel (px)

    The dimension of a CSS pixel can be roughly regarded as a size to be seen comfortably by the naked human eye: not so small that you’d have to squint, and not so big that you’d see pixelation. Instead of consulting your ophthalmologist on the definition of “seen comfortably”, the W3C CSS specification gives us a recommended reference:

    The reference pixel is the visual angle of one pixel on a device with a pixel density of 96 DPI and a distance from the reader of an arm’s length.

    The proper dimension of a CSS pixel is actually dependent on the distance between you and the display. With the exception of Google Glass (which mounts on your head), people usually find their own unique comfortable distance, dependent on their eyesight, for a particular device. Given the fact we have no way to tell whether the user is visually impaired, what concerns us is simply typical viewing distance for the given device form factor — e.g., a mobile phone is usually held closer, and a laptop is usually used on a desk or a lap. Thus, the “viewing distance” of mobile phones is shorter than the one of laptops, or desktop computers.

    The viewing distance

    As previously mentioned, the viewing distance varies from person to person and from device to device, which is why we must categorise devices into form factors. The recommended reference viewing distance (“an arm’s length”) and the reference pixel density (“96 DPI”) are actually historical: they are a testimony to the way people in the late 20th century usually accessed the web:

    The first computer that ran the web. (From Wikipedia: WWW.)

    For day-to-day devices of the 21st century, we have different reference recommendations:

    | Device                             | Baseline pixel density | Width/height of one CSS pixel | Viewing distance    |
    | ---------------------------------- | ---------------------- | ----------------------------- | ------------------- |
    | A 20th century PC with CRT display | 96 DPI                 | ~0.2646 mm (1/96 in)          | 28 in (71.12 cm)    |
    | Modern laptop with LCD[2]          | 125 DPI                | 0.2032 mm (1/125 in)          | 21.5 in (54.61 cm)  |
    | Smartphones/Tablets[3]             | 160 DPI                | ~0.159 mm (1/160 in)          | 16.8 in (42.672 cm) |

    From this table, it’s pretty easy to see that as the pixel density increases and size of a CSS pixel gets smaller, muggles like you and me usually need to hold the device closer to comfortably see what’s on the device screen.

    With that, we established a basic fact in the world of CSS: a CSS pixel will be displayed at different physical dimensions on different devices, but it will always be displayed at a size the viewer finds comfortable. By leveraging this principle, we can safely set basic dimensions (e.g. the base font size) to a fixed pixel size, independent of the device form factor.

    CSS inch (in)

    On a computer screen, a CSS inch has nothing to do with the physical inch. Instead, it is being redefined to be exactly equal to 96 CSS pixels. This has resulted in an awkward situation, where you can never reliably draw an accurate ruler on the screen with basic CSS units[4]. Yet this gives us what’s intended: elements sized in CSS units will always be displayed across devices in a way where the user will feel comfortable.

    As a side note, if the user prints out the page on a piece of paper, browsers will map the CSS inch to the physical inch. You can reliably draw an accurate ruler with CSS and print it out. (Make sure you turn off “scale to fit” in the printer settings!)

    Device pixel ratio (DPPX)

    As we step into the future (where is my flying car?), many smartphones nowadays ship with high-density displays. In order to make sure that CSS pixels are sized consistently across every device that accesses the web (i.e. everything with a screen and a network connection), device manufacturers had to map multiple device pixels to one CSS pixel to make up for their relatively smaller physical size. The ratio of the dimension of a CSS pixel relative to device pixels is the device pixel ratio (DPPX).

    Let’s take the iPhone 4 as the most famous example. It comes with a 326 DPI display. According to our table above, as a smartphone, its typical viewing distance is 16.8 inches and its baseline pixel density is 160 DPI. To create one CSS pixel, Apple chose to set the device pixel ratio to 2, which effectively makes iOS Safari display web pages in the same way as it would on a 163 DPI phone.

    Before we move on, take a look back at the numbers above. We could actually do better by setting the device pixel ratio not to 2 but to 326/160 = 2.0375, making a CSS pixel exactly match the reference dimensions. Unfortunately, such a ratio would have an unintended consequence: as each CSS pixel would no longer be displayed by whole device pixels, the browser would have to make some effort to anti-alias all the bitmap images, borders, etc., since they are almost always defined in whole CSS pixels. It’s hard for browsers to use 2.0375 device pixels to draw your 1-CSS-pixel-wide border; it’s much easier if the ratio is simply 2.

    Incidentally, 163 DPI happens to be the pixel density of the previous generation of the iPhone, so the web works the same way without developers having to do any special “upgrades” to their websites.

    Device manufacturers usually choose 1.5, 2, or other whole numbers as the DPPX value. Occasionally, some devices decide not to play nice and ship with something like 1.325 DPPX; as web developers, we should probably ignore those devices.

    Firefox OS, initially being a mobile phone operating system, implemented DPPX calculation in this way. The actual DPPX will be determined by the manufacturer of each shipping device.
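
    If you need the effective DPPX from script or stylesheets, a quick sketch (window.devicePixelRatio and the dppx media-query unit, supported in Firefox at the time of writing):

    var dppx = window.devicePixelRatio || 1;  // e.g. 2 on an iPhone 4, 1 on most PCs

    if (window.matchMedia('(min-resolution: 2dppx)').matches) {
        // high-density display: serve bigger bitmaps here
    }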

    CSS point (pt)

    The point is a commonly used unit that came from the typography industry, as a unit for metal typesetting. As the world gradually moved from letterpress printing to desktop publishing, a “PostScript point” was redefined as 1/72 inch. CSS follows the same convention, mapping 1 CSS point to 1/72 CSS inch, or 96/72 = 4/3 CSS pixels (so 12pt is exactly 16px).

    You can easily see that, just like the CSS inch, the CSS point on a device display has little to do with the traditional unit. Its size only matches its desktop-publishing counterpart when we actually print out the web page.

    CSS pica (pc), CSS centimeter (cm), CSS millimeter (mm)

    Just like the CSS inch, while their relative relationships are kept, their basic size on the screen has been redefined by the CSS pixel, instead of by the standard SI unit (the metre), which is defined by the speed of light, a universal constant.

    We could literally redefine the speed of light in CSS; it is 1,133,073,857,007.87 CSS pixels per second[5] — relativity in CSS makes light travel a bit slower on devices with smaller form factors than traditional PCs, from our perspective, looking into the screen from the real world.

    The viewport meta tag

    Though the smartphone is handy in the palm of your hand, and it’s guaranteed that CSS pixels will be displayed at a size comfortable for users, a device capable of showing only part of a fixed-width desktop website is not going to be very useful. It would be equally useless if the phone violated the CSS unit rules and pretended to be something else.

    The introduction of the viewport meta tag brought the best of both worlds to mobile devices by giving control of page scaling to both users and web developers. You could design a mobile layout and opt out of viewport scaling, or you could leave your website made for desktop browsers as-is and let the mobile browser scale down the page for you and the user. As always, detailed descriptions and usage can be found on the Mozilla Developer Network.
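
    For reference, a mobile-ready page that opts out of automatic scaling typically declares something like this (a common configuration, not the only valid one):

    <meta name="viewport" content="width=device-width, initial-scale=1">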

    Conclusion

    Browser vendors, while being in competition, recognize the effort of maintaining the stability of the Web platform and coordinate their feature sets through a standards organization. Exposed features and APIs are carefully tested for their usefulness across all scenarios before being declared suitable as a standard. The definition of the CSS pixel has been one of those since the beginning. New features must maintain backward compatibility instead of changing old behavior[6], so many of them (device pixel ratios, the viewport meta tag, etc.) are introduced as extra layers of complexity. Old web pages that use standardized features thus have a built-in “forward compatibility”.

    With that in mind, Mozilla, together with our partners, nurture and defend the Open Web — the unique platform that we all cherish.


    [1] Well, not actually, cause we are not going to make a video about it. I won’t mind if C.G.P. Grey actually made a video about this!

    [2] Appears to be common values for laptops[citation needed].

    [3] The typical value is documented here as a “mdpi” Android device.

    [4] With one exception: the non-standard CSS unit mozmm gives you the ability to do so provided that Firefox knows the pixel density it runs on. This is out of scope of our topic there.

    [5] 299,792,458 metres per second ÷ 0.0254 meters per inch x 96 pixels per inch

    [6] There was a brief period of time where people tried to come up with an entirely new standard that breaks backward compatibility (*cough* xHTML *cough*), but that’s a story for another time.

  8. Getting Started With HTML5 Game Development

    There are plenty of valid ways to create an HTML5 game, and quite a bit of material on the technical aspects of each, so for this article I’ll be giving more of a broad overview of HTML5 game development: how HTML5 can be better than native, where to start with the development process, where to go when you’re stuck, and how to monetize and distribute games.

    Benefits of HTML5

    Most of the audience here already sees the value in HTML5, but I want to reiterate why you should be building an HTML5 game. If you are only targeting iOS, write the game in Objective-C; the cons outweigh the benefits in that scenario. But if you want to build a game that works on a multitude of platforms, HTML5 is the way to go.

    Cross-Platform

    One of the more obvious advantages of HTML5 for games is that the games will work on any modern device. Yes, you will have to put extra thought into how your game will respond to various screen sizes and input types, and yes, you might have to do a bit of ‘personalization’ in the code per platform (the main inhibitor being audio); but it’s far better than the alternative of completely porting the game each time.

    I see too many games that don’t work on mobile and tablets, and in most instances that really is a huge mistake to make when developing your game – keep mobile in mind when developing your HTML5 game!

    Unique Distribution

    Most HTML5 games that have been developed to this point are built in the same manner as Flash and native mobile games. To some extent this makes sense, but what’s overlooked are the actual benefits the Web as a platform adds. It’s as if an iOS developer were to build a game that doesn’t take advantage of how touch differs from a mouse, or if Doodle Jump were built with arrow keys at the bottom of the screen instead of using the device’s accelerometer.

    It’s so easy to fall into the mindset of doing what has worked in the past, but that stifles innovation. It’s a trap I’ve fallen into – trying to 100% emulate what has been successful on iOS, Android, and Flash – and it wasn’t until chatting with former Mozillian Rob Hawkes that I fully realized it. While emulating what worked in the past is necessary to an extent, the Open Web is a different vehicle for games, and innovation can only happen by taking a risk and trying something new.

    Distribution for HTML5 games is often thought of as a weakness, but that’s just because we’ve been looking at it in the same way as native mobile games, where a marketplace is the only way to find games. With HTML5 games you have the incredibly powerful hyperlink. Links are easily distributed across the web and mobile devices (think of how many links you click in the Facebook and Twitter apps), and they certainly should not be limited to the game’s main page. The technology is there to link into your game and do more interesting things: jump to a specific point in a game, try to beat a friend’s score, or play real-time against that friend – use it to your advantage!
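
    As an illustration of that idea, game state can ride along in the URL; a hypothetical sketch (loadLevel and showChallenge are made-up functions standing in for your game’s own logic):

    // e.g. http://example.com/game#level=3&score=1200
    var state = {};
    location.hash.slice(1).split('&').forEach(function (pair) {
        var kv = pair.split('=');
        state[kv[0]] = decodeURIComponent(kv[1] || '');
    });
    if (state.level) { loadLevel(parseInt(state.level, 10)); }      // jump into a level
    if (state.score) { showChallenge(parseInt(state.score, 10)); }  // "beat my score" mode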

    Take a good look at what has worked for the virality of websites and apply those same principles to your games.

    Quicker Development Process

    There’s no waiting for compilation, you can update and debug in real time, and once the game is done you can push updates out immediately.

    Choosing a Game Engine

    Game engines are just one more level of abstraction that take care of a few of the more tedious tasks of game development. Most take care of asset loading, input, physics, audio, sprite maps and animation, but they vary quite a bit. Some engines are pretty barebones, while some (ImpactJS for example) go as far as including a 2D level editor and debug tools.

    Decide Whether or Not You Need a Game Engine

    This is largely a personal decision. Game Engines will almost always reduce the time it takes for you to create a fully-functional game, but I know some folks just like the process of building everything from the ground up so they can better understand every component of the game.

    For simple games, it really isn’t difficult to build from scratch (assuming you have a JavaScript background and understand how games work). Slime Volley (source) for example was built without having a game engine, and none of the components were rocket science. Of course, Slime Volley is a very basic game, building an RPG from the ground up would likely lead to more hair pulling.

    Choosing Between a “Game Engine” and a “Game Maker”

    Most of the typical audience of Mozilla Hacks will probably lean toward using a game engine or building from scratch, but there is also the alternative of using a “Game Maker” like Construct 2. Using a Game Maker means you won’t actually write JavaScript; instead, you create code-like events in the editor. It’s a trade-off between ease of use and speed of prototyping/development on the one hand, and customization and control over the end result on the other. I’ve seen some very impressive games built with either, but as a developer type, I tend to favor writing from scratch or using an engine.

    Finding the Right Game Engine / Game Maker for you

    There are so many HTML5 game engines out there, which is in part a good thing, but it can also be a bad thing, since a large percentage have either already stopped being maintained or soon will. You definitely want to pick an engine that will keep being updated and improved over the years to come.

    HTML5GameEngine.com is a great place to start your search because the hundreds of game engines are narrowed down to about 20 that are established, actively maintained, and have actual games being developed with them.

    For a more complete list of engines (meaning there can be some junk to sift through), this list on GitHub is your best bet.

    Learning Tools

    If you’re going with a game engine, typically their site is the best resource with tutorials and documentation.

    Technical Tutorials

    Game Design Tutorials

    With game development, the technical aspect isn’t everything – what’s more important is that the game actually be fun. Below are a few places to start when learning about game mechanics.

    Helpful Game Tools

    User Retention, Monetization and more

    Full disclosure here: I am a co-founder at Clay.io.

    Making a game function is just part of the equation. You also want players to play longer, come back, tell their friends about it, and maybe even buy something. Common elements in games that focus on these areas are features like user accounts, high scores, achievements, social integration, and in-game payments. On the surface most are typically easy enough to implement, but there are often many cross-platform issues and intricacies that are overlooked. There is also value in having a central service running these across many games – for example, players genuinely care about achievements on Xbox Live because Gamerscore matters to them.

    • Clay.io – user accounts, high scores, achievements, in-game payments, analytics, distribution, and more.
    • Scoreoid – similar to above.

    Development Tools

    • stats.js – A JavaScript performance monitor that displays framerate and performance over time.
    • Socket.IO – realtime client-server communication (if you’re going to have a backend for your game).
    • pixi.js – A canvas and WebGL rendering engine.
    • CocoonJS – Improves HTML5 game performance on iOS and Android with an accelerated canvas bound to OpenGL ES.

    Motivation

    Regardless of what you’re building, extra motivation is always helpful. For games, that motivation often comes from surrounding yourself with others who are in the same boat as you – working on games.

    js13kGames

    js13kGames is a competition taking place at the time of this writing: you have until September 13th, 2013 to develop an HTML5 game that, when compressed, is less than 13kb.

    Mozilla Game On

    Mozilla runs a game competition every year from December through February with some fantastic prizes – last year’s was an all-expense paid, red carpet trip to San Francisco for GDC 2013.

    Clay.io’s “Got Game?”

    Clay.io (full disclosure, I am a founder) runs an annual HTML5 game development competition for students. Last year was the first year and we had over 70 games submitted. The next competition is planned for February / March 2014.

    Ludum Dare

    Ludum Dare doesn’t offer tangible prizes, nor is it specific to HTML5 games, but plenty of HTML5 developers participate.

    One Game a Month

    One Game a Month isn’t so much a competition as an accountability tool. It isn’t restricted to HTML5 games, but many of the participants work with HTML5. The goal is to crank out one game every month. I wouldn’t recommend this long-term, since one month is too short a time to create a great game, but while you’re learning it’s a good way to force yourself to develop and finish simple games.

    Help From the Community

    HTML5GameDevs.com

    HTML5GameDevs has quickly become the most active community of HTML5 game developers. Most folks are very friendly and willing to help with any issues you run into.

    #BBG

    #BBG is the go-to IRC channel for HTML5 games – you’ll even find quite a few Mozillians hanging around.

    How to Make Money

    In-Game Purchases

    In-game payments, in my opinion, are the way to go for HTML5 games in the long term. For now, most HTML5 games have neither enough quality content nor the game mechanics in place to get players purchasing items.

    This is the revenue model with the highest potential, but it’s also by far the most difficult to pull off – not technically, but in having the right game. I’d say the best way to learn how to properly monetize your game in this aspect is to take a look at games that do it really well on Flash and mobile – games from King.com and Zynga typically have this nailed down pretty well. There’s also some good reading material, like The Top F2P Monetization Tricks on Gamasutra.

    Licensing

    As things stand right now with HTML5 games, licensing is the strongest, most consistent way to make money – if and only if your game works well on mobile devices.

    There are countless “Flash Game Portals” that receive organic mobile traffic but can’t monetize it with the Flash games they have. Their solution is to go out and find HTML5 games, buying non-exclusive licenses (the right to put the game on their site, often with small adjustments) to offer to their mobile visitors.

    Typically non-exclusive HTML5 game licenses (meaning you can sell to more than one site) go for $500-$1,000 depending on the game and publisher. Some publishers will do a revenue share model instead where you’ll get a 40-50% share on any advertising revenue, but no up-front money.

    Licensing is the safest way to make money right now, but the cap is limited – the most you’re going to make with a single game is in the $5,000-$6,000 range, but it is easier to hit that mark than it is with in-game payments or advertising.

    Advertising

    Advertising is the middle ground between in-game payments and licensing. It’s easier to make money with than in-game payments, and the potential cap is higher than licensing’s (though probably lower than in-game payments’). Implementing ads is easy enough: pick your ad network of choice (be wary of AdSense’s strict terms) and place ads either around the game or at natural stopping points.

    The commonly used ad networks are LeadBolt for mobile and CPMStar for desktop. You can also use Clay.io which makes it a bit easier to implement advertising, and tries to maximize the revenue by using different ad networks depending on the device used and other factors.

    Distribution

    The final stage in a game’s development is distribution. The game is done; now you want people playing it! Fortunately, with HTML5 there are plenty of places to distribute your game (many of which often go unused).

    More and more marketplaces these days accept HTML5 games as-is. Each has its own requirements (Facebook requires SSL, most require a differently formatted manifest file, etc.), but getting into each typically takes less than 30 minutes. If you want to reduce that even more, Clay.io helps auto-generate the manifest files and promotional image assets you’ll need (and takes care of the SSL requirement) – documentation on that here.
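
    Each marketplace’s manifest format differs; as one concrete example, a minimal Firefox Marketplace manifest.webapp looks roughly like this (all values here are placeholders):

    {
        "name": "My Game",
        "description": "A short description of the game",
        "launch_path": "/index.html",
        "icons": {
            "128": "/img/icon-128.png"
        },
        "developer": {
            "name": "Your Name",
            "url": "http://example.com"
        }
    }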

    For some marketplaces you’ll need a native wrapper for your game – primarily the iOS App Store and Google Play. A wrapper like PhoneGap is one option, but the native webviews have pretty terrible JavaScript engines, so for now you’re better off with tools like CocoonJS and Ejecta.

    Now it’s up to you to go forth and make a great, innovative web game – I’m looking forward to seeing what’s on the horizon in the coming months and years!

  9. New Features of Firefox Developer Tools: Episode 25

    Firefox 25 was just uplifted to the Aurora release channel, which means it’s time to report on the new features in Firefox Developer Tools.

    Here’s a summary of some of the most exciting new features; for the whole picture you can check the complete list of resolved Bugzilla tickets.

    Black box libraries in the Debugger

    In modern web development, we often rely on libraries like jQuery, Ember, or Angular, and 99% of the time we can safely assume that they “just work”. We don’t care about the internal implementation of these libraries: we treat them like a black box. However, a library’s abstraction leaks during debugging sessions when you are forced to step through its stack frames to reach your own code. To alleviate this problem, we introduced black boxing: a feature that lets you tell the debugger to ignore the details of selected sources.

    To black box a source, you can mark sources one at a time by disabling the little eyeball next to each one in the sources list:
    [screenshot: the eyeball toggle in the sources list]

    Or you can black box many sources at once by bringing up the developer toolbar with Shift+F2 and using the dbg blackbox command:

    dbg blackbox --glob *-min.js

    When a source is black boxed:

    • Any breakpoints it may have are disabled.
    • When “pause on exceptions” is enabled, the debugger won’t pause when an exception is thrown in the black boxed source; instead it will wait until (and if) the stack unwinds to a frame in a source that isn’t black boxed.
    • The debugger will skip through black boxed sources when stepping.

    To see this in action and learn more about the details, check out the black boxing screencast on YouTube.

    Replay and edit requests in the Network Monitor

    You can now debug a network request by modifying headers before resending it. Right-click on an existing request and select the “resend” context menu item:

    [screenshot: the resend context menu item]

    Now you can tweak the HTTP method, URL, headers, and request body before sending the request off again:

    [screenshot: tweaking the method, URL, headers, and body before resending]

    CSS Autocompletion in the inspector

    Writing CSS in the inspector is now much easier as we enabled autocompletion of CSS properties and values.

    [screenshot: autocompletion of CSS properties and values]

    What’s more, it even works on inline style attributes:

    [screenshot: autocompletion in an inline style attribute]

    Aside: this feature was implemented by contributors Girish Sharma and Mina Almasry. If you want to take your tools into your own hands too, check out our wiki page on how to get involved with developer tools.

    Execute JS in the current paused frame

    One request we’ve heard repeatedly is the ability to execute JS from the webconsole in the scope of the current paused frame in the debugger rather than the global scope. This is now possible. Using the webconsole to execute JS in the current frame can make it much easier to debug your apps.

    Edit: The webconsole has actually been executing in the current frame since Firefox 23; in Firefox 25, Scratchpad will execute in the current frame as well.

    Import and export profiled data in the Profiler

    Hacking on a shared project and think you’ve found a performance regression in some bit of code owned by one of your friends? Don’t just file a GitHub issue with steps to reproduce the slowness – export and attach a profile that shows exactly how much slowness there is and where it occurs. Your friend will thank you when they’re trying to reproduce and debug the regression. Click the “import” button next to the start-profiling button to load a profile from disk, and hit “save” on an existing profile to export it.

    [screenshot: the profiler’s import and save buttons]

    When can I use these features?

    All of these features and more are available in the Aurora release channel. In another 12 weeks, these features will roll over into Firefox stable.

    Have some feedback about devtools? Ping @FirefoxDevTools on Twitter, or swing by #devtools on irc.mozilla.org.

  10. WebRTC and the Early API

    In my last article, WebRTC and the Ocean of Acronyms, I went over the networking terminology behind WebRTC. In this sequel of sorts, I will go over the new WebRTC API in laborious detail. By the end of it you should have working peer-to-peer DataChannels and media.

    Shims

    As you can imagine, with such an early API, you must use the browser prefixes and shim it to a common variable.

    var PeerConnection = window.mozRTCPeerConnection || window.webkitRTCPeerConnection;
    var IceCandidate = window.mozRTCIceCandidate || window.RTCIceCandidate;
    var SessionDescription = window.mozRTCSessionDescription || window.RTCSessionDescription;
    navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;

    PeerConnection

    This is the starting point to creating a connection with a peer. It accepts information about which servers to use and options for the type of connection.

    var pc = new PeerConnection(server, options);

    server

    The server object contains information about which TURN and/or STUN servers to use. This is required to ensure that most users can actually create a connection, by working around restrictions imposed by NATs and firewalls.

    var server = {
        iceServers: [
            {url: "stun:23.21.150.121"},
            {url: "stun:stun.l.google.com:19302"},
            {url: "turn:numb.viagenie.ca", credential: "webrtcdemo", username: "louis%40mozilla.com"}
        ]
    }

    Google runs a public STUN server that we can use. I also created an account at http://numb.viagenie.ca/ for free access to a TURN server. You may want to do the same and replace the credentials with your own.

    options

    Depending on the type of connection, you will need to pass some options.

    var options = {
        optional: [
            {DtlsSrtpKeyAgreement: true},
            {RtpDataChannels: true}
        ]
    }

    DtlsSrtpKeyAgreement is required for Chrome and Firefox to interoperate.

    RtpDataChannels is required if we want to make use of the DataChannels API on Firefox.

    ICECandidate

    After creating the PeerConnection and passing in the available STUN and TURN servers, an event will be fired once the ICE framework has found some “candidates” that will allow you to connect with a peer. These are known as ICE candidates, and they trigger the PeerConnection#onicecandidate callback.

    pc.onicecandidate = function (e) {
        // candidate exists in e.candidate
        if (e.candidate == null) { return }
        send("icecandidate", JSON.stringify(e.candidate));
        pc.onicecandidate = null;
    };

    When the callback is executed, we must use the signal channel to send the candidate to the peer. On Chrome, multiple ICE candidates are usually found; we only need one, so I typically send the first one and then remove the handler. Firefox includes the candidate in the offer SDP.
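
    The snippet above only sends the candidate; the receiving peer also has to add it to its own PeerConnection. A minimal sketch, assuming the send()/recv() signal channel helpers described in the next section:

    recv("icecandidate", function (candidate) {
        // Rebuild the candidate object and hand it to the PeerConnection
        pc.addIceCandidate(new IceCandidate(JSON.parse(candidate)));
    });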

    Signal Channel

    Now that we have an ICE candidate, we need to send it to our peer so they know how to connect with us. However, this leaves us with a chicken-and-egg situation: we want PeerConnection to send data to a peer, but before that we need to send them metadata…

    This is where the signal channel comes in. It’s any method of data transport that allows two peers to exchange information. In this article, we’re going to use Firebase because it’s incredibly easy to set up and doesn’t require any hosting or server code.

    For now, just imagine two methods exist: send() takes a key and assigns data to it, and recv() calls a handler when a key has a value.

    The structure of the database will look like this:

    {
        "<roomId>": {
            "candidate:<peerType>": …
            "offer": …
            "answer": …
        }
    }

    Connections are divided by a roomId and will store four pieces of information: the ICE candidate from the offerer, the ICE candidate from the answerer, the offer SDP, and the answer SDP.
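
    As a rough sketch, here is what send() and recv() could look like on top of the (2013-era) Firebase API – the app URL and roomId are placeholders, and error handling is omitted:

    var ref = new Firebase("https://your-app.firebaseio.com/").child(roomId);

    // Assign data to a key under this room
    function send(key, data) {
        ref.child(key).set(data);
    }

    // Call the handler once the key has a (non-null) value
    function recv(key, handler) {
        ref.child(key).on("value", function (snapshot) {
            var data = snapshot.val();
            if (data !== null) handler(data);
        });
    }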

    Offer

    An Offer SDP (Session Description Protocol) is metadata that describes to the other peer what to expect: video, formats, codecs, encryption, resolution, size, and so on.

    An exchange requires an offer from a peer, then the other peer must receive the offer and provide back an answer.

    pc.createOffer(function (offer) {
        pc.setLocalDescription(offer);
     
        send("offer", JSON.stringify(offer));
    }, errorHandler, constraints);

    errorHandler

    If there was an issue generating an offer, this method will be executed with error details as the first argument.

    var errorHandler = function (err) {
        console.error(err);
    };

    constraints

    Options for the offer SDP.

    var constraints = {
        mandatory: {
            OfferToReceiveAudio: true,
            OfferToReceiveVideo: true
        }
    };

    OfferToReceiveAudio/Video tells the other peer that you would like to receive video or audio from them. This is not needed for DataChannels.

    Once the offer has been generated, we set the local description to the new offer, send it through the signal channel to the other peer, and await their answer SDP.

    Answer

    An Answer SDP is just like an offer but a response; sort of like answering the phone. We can only generate an answer once we have received an offer.

    recv("offer", function (offer) {
        offer = new SessionDescription(JSON.parse(offer))
        pc.setRemoteDescription(offer);
     
        pc.createAnswer(function (answer) {
            pc.setLocalDescription(answer);
     
            send("answer", JSON.stringify(answer));
        }, errorHandler, constraints);
    });
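
    The snippets stop at sending the answer; on the offerer’s side, the handshake completes by setting the remote description once the answer arrives – a minimal sketch using the same helpers:

    recv("answer", function (answer) {
        answer = new SessionDescription(JSON.parse(answer));
        pc.setRemoteDescription(answer);
    });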

    DataChannel

    I will first explain how to use PeerConnection with the DataChannels API to transfer arbitrary data between peers.

    Note: At the time of this article, interoperability between Chrome and Firefox is not possible with DataChannels. Chrome supports a similar but private protocol and will support the standard protocol soon.

    var channel = pc.createDataChannel(channelName, channelOptions);

    The offerer should be the peer who creates the channel. The answerer will receive the channel in the callback ondatachannel on PeerConnection. You must call createDataChannel() once before creating the offer.

    channelName

    This is a string that acts as a label for your channel. Warning: make sure your channel name has no spaces, or Chrome will fail on createAnswer().

    channelOptions

    var channelOptions = {};

    Currently these options are not well supported on Chrome so you can leave this empty for now. Check the RFC for more information about the options.

    Channel Events and Methods

    onopen

    Executed when the connection is established.
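
    A minimal handler, in the same style as the ones below (the log message is just illustrative):

    channel.onopen = function () {
        console.log("Channel open, ready to send");
    };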

    onerror

    Executed if there is an error creating the connection. First argument is an error object.

    channel.onerror = function (err) {
        console.error("Channel Error:", err);
    };

    onmessage

    channel.onmessage = function (e) {
        console.log("Got message:", e.data);
    }

    The heart of the connection. When you receive a message, this method will execute. The first argument is an event object which contains the data, time received and other information.

    onclose

    Executed if the other peer closes the connection.

    Binding the Events

    If you were the creator of the channel (meaning the offerer), you can bind events directly to the DataChannel you created with createDataChannel(). If you are the answerer, you must use the ondatachannel callback on PeerConnection to access the same channel.

    pc.ondatachannel = function (e) {
        e.channel.onmessage = function () {};
    };

    The channel is available in the event object passed into the handler as e.channel.

    send()

    channel.send("Hi Peer!");

    This method allows you to send data directly to the peer! Amazing. You must send either String, Blob, ArrayBuffer or ArrayBufferView, so be sure to stringify objects.
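
    For example, sending a plain object means serializing it first (the message shape here is made up):

    // Only String, Blob, ArrayBuffer, and ArrayBufferView are accepted,
    // so objects must be serialized before sending
    channel.send(JSON.stringify({type: "move", x: 10, y: 20}));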

    close()

    Close the channel when the connection is no longer needed. It is recommended to do this on page unload.
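
    One way to do that – a minimal sketch; depending on your app, you may also want to tear down the PeerConnection itself:

    window.onbeforeunload = function () {
        channel.close(); // close the DataChannel
        pc.close();      // and the PeerConnection
    };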

    Media

    Now we will cover transmitting media such as audio and video. To display the video and audio, you must include a <video> tag in the document with the autoplay attribute.

    Get User Media

    <video id="preview" autoplay></video>
     
    var video = document.getElementById("preview");
    navigator.getUserMedia(mediaOptions, function (stream) {
        video.src = URL.createObjectURL(stream);
    }, errorHandler);

    mediaOptions

    Constraints on which media types you want to request from the user.

    var mediaOptions = {
        video: true,
        audio: true
    };

    If you just want an audio chat, remove the video key.

    errorHandler

    Executed if there is an error returning the requested media.

    Media Events and Methods

    addStream

    Add the stream from getUserMedia to the PeerConnection.

    pc.addStream(stream);

    onaddstream

    Executed when the connection has been set up and the other peer has added a stream to the peer connection with addStream. You need another <video> tag to display the other peer’s media.

    <video id="otherPeer" autoplay></video>
     
    var otherPeer = document.getElementById("otherPeer");
    pc.onaddstream = function (e) {
        otherPeer.src = URL.createObjectURL(e.stream);
    };

    The first argument is an event object with the other peer’s media stream.

    View the Source already

    You can see the source built up from all the snippets in this article at my WebRTC repo.