Mozilla

Featured Articles

  1. Launching developer Q&A on Stack Overflow

    One thing that is very important for us at Mozilla is the need to interact directly with you, the developers, and help you with the challenges and issues you run into while developing with open technologies. We are now happy to announce our presence on Stack Overflow!

    Stack Overflow is one of – if not the – best-known question and answer sites among developers, and we believe it’s very important to go where developers already are and already choose to have their discussions.

    Initially our main focus is on Firefox OS app development and Open Web Apps in general. There are three tags that we are monitoring:

    All tags and posts are available at our landing page on Stack Overflow: http://stackoverflow.com/r/mozilla

    Take part!

    We’d love for you to participate with both questions and answers, so that together we can collaborate and make the Open Web and open technologies the first choice for developers!

    A little extra motivation might be that if you answer a lot of questions, you’ll show up in the Top Users page for that tag, e.g. http://stackoverflow.com/tags/firefox-os/topusers

    Some resources on HTML5, Apps, Developer Tools and more that might be useful for you:

    See you on Stack Overflow!

  2. Introducing the Whiteboard Drum – WebRTC and Web Audio API magic

    Browser functionality has expanded rapidly, way beyond merely “browsing” a document. Recently, Web browsers finally gained audio processing abilities with the Web Audio API. It is powerful enough to build serious music applications.

    Not only that, but it is also very interesting when combined with other APIs. One of these APIs is getUserMedia(), which allows us to capture audio and/or video from the local PC’s microphone / camera devices. Whiteboard Drum (code on GitHub) is a music application, and a great example of what can be achieved using Web Audio API and getUserMedia().

    I demonstrated Whiteboard Drum at the Web Music Hackathon Tokyo in October. It was a very exciting event on the subject of the Web Audio API and Web MIDI API: many instruments can now collaborate with the browser, and the browser can also gain new interfaces to the real world.

    I believe this suggests further possibilities of Web-based music applications, especially using Web Audio API in conjunction with other APIs. Let’s explain how the key features of Whiteboard Drum work, showing relevant code fragments as we go.

    Overview

    First of all, let me show you a picture from the Hackathon:

    And a short video demo:

    As you can see, Whiteboard Drum plays a rhythm according to the matrix pattern on the whiteboard. The whiteboard has no magic; it just needs to be pointed at by a WebCam. Though I used magnets in the demo, you can draw the markers with a pen if you wish. Each row represents one instrument (Cymbal, Hi-Hat, Snare-Drum or Bass-Drum), and each column represents a timing step. In this implementation, the sequence has 8 steps. A blue marker activates a grid cell normally, and a red one activates it with an accent.

    The processing flow is:

    1. The whiteboard image is captured by the WebCam
    2. The matrix pattern is analysed
    3. This pattern is fed to the drum sound generators to create the corresponding sound patterns

    Although it uses nascent browser technologies, each process itself is not so complicated. Some key points are described below.

    Image capture by getUserMedia()

    getUserMedia() is a function for capturing video/audio from webcam/microphone devices. It is a part of WebRTC and a fairly new feature in web browsers. Note that the user’s permission is required to get the image from the WebCam. If we were just displaying the WebCam image on the screen, it would be trivially easy. However, we want to access the image’s raw pixel data in JavaScript for further processing, so we need to draw the video onto a canvas and read the pixels back with the getImageData() function.

    Because pixel-by-pixel processing is needed later in this application, the captured image’s resolution is reduced to 400 x 200px; that means one rhythm grid is 50 x 50 px in the rhythm pattern matrix.

    Note: Though most recent laptops/notebooks have embedded WebCams, you will get the best results on Whiteboard Drum from an external camera, because the camera needs to be precisely aimed at the picture on the whiteboard. Also, the selection of input from multiple available devices/cameras is not standardized currently, and cannot be controlled in JavaScript. In Firefox, it is selectable in the permission dialog when connecting, and it can be set up from “contents setup” option of the setup screen in Google Chrome.

    Get the WebCam video

    We don’t want to show these parts of the processing on the screen, so first we hide the video:

    <video id="video" style="display:none"></video>

    Now to grab the video:

    video = document.getElementById("video");
    navigator.getUserMedia=navigator.getUserMedia||navigator.webkitGetUserMedia||navigator.mozGetUserMedia;
    navigator.getUserMedia({"video":true},
        function(stream) {
            video.src= window.URL.createObjectURL(stream);
            video.play();
        },
        function(err) {
            alert("Camera Error");
        });

    Capture it and get pixel values

    We also hide the canvas:

    <canvas id="capture" width=400 height=200 style="display:none"></canvas>

    Then capture our video data on the canvas:

    function Capture() {
        ctxcapture.drawImage(video,0,0,400,200);
        imgdatcapture=ctxcapture.getImageData(0,0,400,200);
    }

    The video from the WebCam will be drawn onto the canvas at periodic intervals.
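
    The setup of the capture context and the capture timer isn’t shown in the article; a minimal sketch might look like the following. The 500 ms interval is an assumption, while ctxcapture and imgdatcapture match the variable names used in the Capture() function above.

    var capturecanvas = document.getElementById("capture");
    var ctxcapture = capturecanvas.getContext("2d");
    var imgdatcapture;

    // Grab a fresh frame from the hidden <video> element a couple of times per second.
    setInterval(Capture, 500);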

    Image analyzing

    Next, we need to get the 400 x 200 pixel values with getImageData(). The analyzing phase analyses the 400 x 200 image data in an 8 x 4 matrix rhythm pattern, where a single matrix grid is 50 x 50 px. All necessary input data is stored in the imgdatcapture.data array in RGBA format, 4 elements per pixel.

    var pixarray = imgdatcapture.data;
    var step;
    for(var x = 0; x < 8; ++x) {
        var px = x * 50;
        if(invert)
            step=7-x;
        else
            step=x;
        for(var y = 0; y < 4; ++y) {
            var py = y * 50;
            var lum = 0;
            var red = 0;
            for(var dx = 0; dx < 50; ++dx) {
                for(var dy = 0; dy < 50; ++dy) {
                    var offset = ((py + dy) * 400 + px + dx)*4;
                    lum += pixarray[offset] * 3 + pixarray[offset+1] * 6 + pixarray[offset+2];
                    red += (pixarray[offset]-pixarray[offset+2]);
                }
            }
            if(lum < lumthresh) {
                if(red > redthresh)
                    rhythmpat[step][y]=2;
                else
                    rhythmpat[step][y]=1;
            }
            else
                rhythmpat[step][y]=0;
        }
    }

    This is a straightforward pixel-by-pixel analysis inside grid-by-grid loops. In this implementation, the analysis is done on luminance and redness: if a grid cell is “dark”, it is activated; if it is also red, it is accented.

    The luminance calculation uses a simplified weighting, R * 3 + G * 6 + B, which yields roughly ten times the luminance value, i.e. a range of 0 to 2550 for each pixel. The redness, R - B, is an experimental value, because all that is required is a decision between red and blue. The result is stored in the rhythmpat array, with a value of 0 for nothing, 1 for blue or 2 for red.

    Sound generation through the Web Audio API

    Because the Web Audio API is a cutting-edge technology, it is not yet supported by every web browser. Currently, Google Chrome, Safari, WebKit-based Opera and Firefox (25 or later) support this API. Note: Firefox 25 is the latest version, released at the end of October.

    For other web browsers, I have developed a polyfill that falls back to Flash: WAAPISim, available on GitHub. It provides almost all functions of the Web Audio API to unsupported browsers, for example Internet Explorer.

    Web Audio API is a large scale specification, but in our case, the sound generation part requires just a very simple use of the Web Audio API: load one sound for each instrument and trigger them at the right times. First we create an audio context, taking care of vendor prefixes in the process. Prefixes currently used are webkit or no prefix.

    audioctx = new (window.AudioContext||window.webkitAudioContext)();

    Next we load sounds to buffers via XMLHttpRequest. In this case, different sounds for each instrument (bd.wav / sd.wav / hh.wav / cy.wav) are loaded into the buffers array:

    var buffers = [];
    var req = new XMLHttpRequest();
    var loadidx = 0;
    var files = [
        "samples/bd.wav",
        "samples/sd.wav",
        "samples/hh.wav",
        "samples/cy.wav"
    ];
    function LoadBuffers() {
        req.open("GET", files[loadidx], true);
        req.responseType = "arraybuffer";
        req.onload = function() {
            if(req.response) {
                audioctx.decodeAudioData(req.response,function(b){
                    buffers[loadidx]=b;
                    if(++loadidx < files.length)
                        LoadBuffers();
                },function(){});
            }
        };
        req.send();
    }

    The Web Audio API generates sounds by routing graphs of nodes. Whiteboard Drum uses a simple graph that is accessed via AudioBufferSourceNode and GainNode. An AudioBufferSourceNode plays back the AudioBuffer and routes to the destination (output) directly (for a normal *blue* sound), or routes to the destination via the GainNode (for an accented *red* sound). Because an AudioBufferSourceNode can be used just once, a new one is created for each trigger.

    Preparing the GainNode as the output point for accented sounds is done like this:

    gain = audioctx.createGain();
    gain.gain.value = 2;
    gain.connect(audioctx.destination);

    And the trigger function looks like so:

    function Trigger(instrument,accent,when) {
        var src=audioctx.createBufferSource();
        src.buffer=buffers[instrument];
        if(accent)
            src.connect(gain);
        else
            src.connect(audioctx.destination);
        src.start(when);
    }

    All that is left to discuss is the accuracy of the playback timing, according to the rhythm pattern. Though it would be simple to keep creating the triggers with a setInterval() timer, it is not recommended. The timing can be easily messed up by any CPU load.

    To get accurate timing, using the time management system embedded in the Web Audio API is recommended. It calculates the when argument of the Trigger() function above.

    // console.log(nexttick-audioctx.currentTime);
    while(nexttick - audioctx.currentTime < 0.3) {
        var p = rhythmpat[step];
        for(var i = 0; i < 4; ++i)
            Trigger(i, p[i], nexttick);
        if(++step >= 8)
            step = 0;
        nexttick += deltatick;
    }

    In Whiteboard Drum, this code controls the core of the functionality. nexttick contains the accurate time (in seconds) of the next step, while audioctx.currentTime is the accurate current time (again, in seconds). The routine looks 300 ms into the future and schedules, in advance, every step whose time falls within that window (while nexttick - audioctx.currentTime < 0.3). The commented-out console.log will print the timing margin; this routine is called periodically, and if that value ever goes negative, the timing has fallen behind.
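
    For completeness, here is a sketch of how the scheduler state might be initialized and driven. The tempo, the 100 ms polling interval and the ScheduleSteps() wrapper (which would contain the while loop shown above) are assumptions for illustration and are not taken from the original code.

    var bpm = 120;                     // assumed tempo
    var deltatick = (60 / bpm) / 2;    // seconds per step, treating each step as an 8th note
    var step = 0;
    var nexttick = audioctx.currentTime;

    // Poll often enough that the 300 ms look-ahead window never runs empty.
    setInterval(ScheduleSteps, 100);   // ScheduleSteps() wraps the while loop above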

    For more detail, here is a helpful document: A Tale of Two Clocks – Scheduling Web Audio with Precision

    About the UI

    Especially in music production software like DAW or VST plugins, the UI is important. Web applications do not have to emulate this exactly, but something similar would be a good idea. Fortunately, the very handy WebComponent library webaudio-controls is available, allowing us to define knobs or sliders with just a single HTML tag.

    NOTE: webaudio-controls uses Polymer.js, which sometimes has stability issues, causing unexpected behavior once in a while, especially when combining it with complex APIs.

    Future work

    This is already an interesting application, but it can be improved further. Obviously the camera position adjustment is an issue. The analysis could be smarter if it had automatic position adjustment (using some kind of marker?) and adaptive color detection. Sound generation could also be improved, with more instruments, more steps and more sound effects.

    How about a challenge?

    Whiteboard Drum is available at http://www.g200kg.com/whiteboarddrum/, and the code is on GitHub.

    Have a play with it and see what rhythms you can create!

  3. Firefox Developer Tools: Episode 27 – Edit as HTML, Codemirror & more

    Firefox 27 was just uplifted to the Aurora release channel, which means we are back to report on new features in Firefox Developer Tools. Below are just some of the new features; you can also take a look at all bugs resolved in DevTools for this release.

    JS Debugger: Break on DOM Events

    You can now automatically break on a variety of DOM events, without needing to manually set a breakpoint. To do this, click on the “Expand Panes” button on the top right of the debugger panel (right next to the search box). Then flip over to the events tab. Click on an event name to automatically pause the next time it happens. This will only show events that currently have listeners bound from your code. If you click on one of the headings, like “Mouse” or “Keyboard”, then all of the relevant events will be selected.

    Inspector improvements

    We’ve listened to feedback from web developers and made a number of enhancements to the Inspector:

    Edit as HTML

    Now in the inspector, you can right click on an element and open up an editor that allows you to set the outerHTML on an element.

    Select default color format

    You now have an option to select the default color format in the option panel:

    Color swatch previews

    The Developer Tools now offer color swatch previews that show up in the rule view:

    Image previews for background image urls

    Highly requested, we now offer image previews for background image URLs:

    In addition to the above improvements, mutated DOM elements are now highlighted in the Inspector.

    Keep an eye out for more tooltips coming soon, and feel free to chime in if you have any others you’d like to see!

    Codemirror

    CodeMirror is a popular HTML5-based code editor component used on web sites. It is customizable and themeable. The Firefox DevTools now use CodeMirror in various places: the Style Editor, Debugger, Inspector (Edit as HTML) and Scratchpad.

    From the Option panel, the user can select which theme to use (dark or light).

    WebConsole: Reflow Logging

    When the layout is invalidated (by CSS or DOM changes), Gecko needs to re-compute the position of some nodes. This computation doesn’t happen immediately; it’s triggered for various reasons. For example, if you read “node.clientTop”, Gecko needs to do this computation. This computation is called a “reflow”. Reflows are expensive, and avoiding them is important for responsiveness.
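
    As a small, hypothetical illustration of code that triggers a reflow (and would therefore show up in the log):

    var node = document.getElementById("box");   // hypothetical element
    node.style.width = "300px";                  // invalidates the layout
    var top = node.clientTop;                    // reading layout data forces a synchronous reflow
    console.log(top);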

    To enable reflow logging, check the “Log” option under the “CSS” menu in the Console tab. Now, every time a reflow happens, a log entry will be printed with the name of the JS function that triggered the reflow (if it was caused by JS).

    That’s all for this time. Hope you like the new improvements!

  4. Songs of Diridum: Pushing the Web Audio API to Its Limits

    When we at Goo Technologies heard that the Web Audio API would be supported in an upcoming version of Mozilla Firefox, we immediately started brainstorming about what we could build with that.

    We started discussing the project with the game developers behind “Legend of Diridum” (see below) and came up with the idea of a small market place and a small jazz band on a stage. The feeling we wanted to capture was that of a market place coming to life. The band is playing a song to warm up and the crowd has not gathered around yet. The evening is warm and the party is about to start.

    We call the resulting demo “Songs of Diridum”.

    What the Web Audio API can do for you

    We will take a brief look at the web audio system from three perspectives. These are game design, audio engineering and programming.

    From a game designer perspective, we can use the functionality of the Web Audio API to tune the soundscape of our game. We can run a whole lot of separate sounds simultaneously while also adjusting their character to fit an environment or a game mechanic. You can have muffled sounds coming through a closed door and open the filters for these sounds to unmuffle them gradually as the door opens, in real time. We can add the reflected sounds of the environment to the footsteps of our character as we walk from a sidewalk into a church; the ambient sounds of the street will echo around the church together with our footsteps. We can attach the sound of a roaring fire to our magician’s fireball, hurl it away and hear the fireball moving towards its target. We can hear the siren of a police car approaching and hear how it passes by from the pitch shift known as the Doppler effect. And we know we can use these features without needing to manage the production of an audio engine. It’s already there and it works.

    From an audio engineering perspective, we view the Web Audio API as a big patch bay with a slew of outboard gear, tape stations and mixers. On a low level we feel reasonably comfortable with the fundamental aspects of the system. We can comfortably change the volume of a sound while it is playing without running the risk of inducing digital distortion from the volume level jumping from one sample to the next; the system makes the interpolation needed for this type of adjustment. We can also build the type of effects we want and hook them up however we want. As long as we keep our system reasonably small, we can make a nice studio with the Web Audio API.

    From a programmer perspective, we can write the code needed for our project with ease. If we run into a problem we will usually find a good solution to it on the web. We don’t have to spend our time learning how to work with some poorly documented proprietary audio engine. The problems we will be working on the most are probably related to how the code is structured: figuring out how to handle the loading of the sounds and which sounds to load when, and how to provide these sounds to the game designer through some suitable data structure or other design pipelines. We will also work with the team to figure out how to handle the budgeting of the sound system. How much data can we use? How many sounds can we play at the same time? How many effects can we use on the target platform? It is likely that the hardest problem, the biggest technical risk, is related to handling the diversity of hardware and browsers running on the web.

    About Legend of Diridum

    This sound demo called “Songs of Diridum” is actually a special demo based on graphics and setting from the upcoming game “LOD: Legend of Diridum”. The LOD team is led by the game design veteran Michael Stenmark.

    LOD is an easy to learn, user-friendly fantasy role playing game set on top of a so called sandbox world. It is a mix of Japanese fantasy and roleplaying games drawing inspiration from Grandia, Final Fantasy, Zelda and games like Animal Crossing and Minecraft.

    The game is set in a huge fantasy world, in the aftermath of a terrible magic war. The world is haunted by the monsters, ghosts and undead that were part of the warlocks’ armies, and the player starts the game as the Empire’s official ghost hunter, tasked with cleansing the lands of evil and keeping the people of Diridum safe. LOD is built on the Goo Engine and can be played in almost any web browser without the need to download anything.

    About the music

    The song was important to set the mood and to embrace the warm feeling of a hot summer night turning into a party. Adam Hagstrand, the composer, nailed it right away. We love the way he got it very laid back, jazzy. Just that kind of tune a band would warm up with before the crowd arrives.

    Quickly building a 3D game with Goo Engine

    We love the web, and we love HTML5. HTML5 runs in the browser on multiple devices and does not need any special client software to be downloaded and installed. This allows for games to be published on nearly every conceivable web site, and since it runs directly in the browser, it opens up unprecedented viral opportunities and social media integration.

    We wanted to build Songs of Diridum as a HTML5 browser game, but how to do that in 3D? The answer was WebGL. WebGL is a new standard in HTML5 that allows games to gain access to hardware acceleration, just like native games. The introduction of WebGL in HTML5 is a massive leap in what the browser can deliver and it allows for web games to be built with previously unseen graphics quality. WebGL powered HTML5 does not require that much bandwidth during gameplay. Since the game assets are downloaded (pre-cached) before and during gameplay, even modest speed internet connections suffice.

    But building a WebGL game from scratch is a massive undertaking. The Goo Platform from Goo Technologies is the solution for making it much easier to build WebGL games and applications. In November, Goo Create is being released, making it even more accessible and easy to use.

    Goo is a HTML5 and WebGL based graphics development platform capable of powering the next generation of web games and apps. From the ground up, it’s been built for amazing graphics smoothness and performance while at the same time making things easy for graphics creators. Since it’s HTML5, it enables creators to build and distribute advanced interactive graphics on the web without the need for special browser plugins or software downloads. With Goo you have the power to publish hardware accelerated games & apps on desktops, laptops, smart TVs, tablets or mobile devices. It gives instant access to smooth rich graphics and previously unimagined browser game play.

    UPDATE: Goo Technologies has just launched their interactive 3D editor, Goo Create, which radically simplifies the process of creating interactive WebGL graphics.

    Building Songs of Diridum

    We built this demo project with a rather small team working for a relatively short time. In total we have had about seven or so people involved with the production. Most team members have done sporadic updates to our data and code. Roughly speaking we have not been following any conventional development process but rather tried to get as good a result as we can without going into any bigger scale production.

    The programming of the demo has two distinct sides. One for building the world and one for wiring up the sound system. Since the user interface primarily is used for controlling the state of the sound system we let the sound system drive the user interface. It’s a simple approach but we also have a relatively simple problem to solve. Building a small 3D world like this one is mostly a matter of loading model data to various positions in 3D space. All the low level programming needed to get the scene to render with the proper colors and shapes is handled by the Goo Engine so we have not had to write any code for those parts.

    We defined a simple data format for adding model data to the scene, we also included the ability to add sounds to the world and some slightly more complex systems such as the animated models and the bubbly water sound effect.

    The little mixer panel that you can play around with to change the mix of the jazz band is dynamically generated by the sound system:

    Since we expected this app to be used on touch screen devices, we also decided to use only clickable buttons for the interface. We would not have had enough time to test the usability of any other type of control when aiming at such a diffuse collection of mobile devices.

    Using Web Audio in a 3D world

    To build the soundscape of a 3D world we have access to spatialization of sound sources and the ears of the player. The spatial aspects of sounds and listeners are boiled down to position, direction and velocity. Each sound can also be made to emit its sound in a directional cone; in short, this emulates the difference between the front and back of a loudspeaker. A piano would not really need any directionality as it sounds quite similar in all directions. A megaphone, on the other hand, is comparably directional and should be louder at the front than at the back.

    if (is3dSource) {
        // 3D sound source
        this.panNode = this.context.createPanner();
        this.gainNode.connect(this.panNode);
        this.lastPos = [0,0,0];
        this.panNode.rolloffFactor = 0.5;
    } else {
        // Stereo sound source “hack”
        this.panNode = mixNodeFactory.buildStereoChannelSplitter(this.gainNode, context);
        this.panNode.setPosition(0, this.context.currentTime);
    }
    

    The position of the sound is used to determine the panning (which speaker the sound is louder in) and how loud the sound should be.

    The velocity of the sound and of the listener, together with their positions, provides the information needed to Doppler-shift all sound sources in the world accordingly. For other in-world effects, such as muffling sounds behind a door or changing the sound of the room with reverbs, we’ll have to write some code and configure processing nodes to meet our desired results.
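
    Here is a rough sketch of how the listener and a 3D source’s panner node might be updated from the game loop so the API can derive panning, distance attenuation and Doppler shift. The position/velocity arguments are placeholders for whatever the engine provides, and the setter methods shown are the 2013-era Web Audio API calls.

    // Called once per frame from the game loop (all arguments are hypothetical engine data).
    function updateSpatialization(context, panNode, listenerPos, listenerVel, sourcePos, sourceVel) {
        var listener = context.listener;
        listener.setPosition(listenerPos.x, listenerPos.y, listenerPos.z);
        listener.setVelocity(listenerVel.x, listenerVel.y, listenerVel.z);
        panNode.setPosition(sourcePos.x, sourcePos.y, sourcePos.z);
        panNode.setVelocity(sourceVel.x, sourceVel.y, sourceVel.z);
    }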

    For adding sounds to the user interface and such direct effects, we can hook the sounds up without going through the spatialization, which makes them a bit simpler. You can still process the effects and be creative if you like. Perhaps pan the sound source based on where the mouse pointer clicked.

    Initializing Web Audio

    Setting up for using Web Audio is quite simple. The tricky parts are related to figuring out how much sound you need to preload and how much you feel comfortable with loading later. You also need to take into account that loading a sound contains two asynchronous and potentially slow operations. The first is the download, the second is the decompression from some small format such as OGG or MP3 to arraybuffer. When developing against a local machine you’ll find that the decompression is a lot slower than the download and as with download speeds in general we can expect to not know how much time this will require for any given user.
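
    As a minimal sketch of those two asynchronous steps (download, then decode), assuming an existing AudioContext named context and a caller-provided callback:

    function loadSound(url, callback) {
        var request = new XMLHttpRequest();
        request.open("GET", url, true);
        request.responseType = "arraybuffer";   // step 1: download the compressed OGG/MP3
        request.onload = function() {
            // step 2: decompress into an AudioBuffer (also asynchronous, and often slower)
            context.decodeAudioData(request.response, function(buffer) {
                callback(buffer);
            }, function() {
                console.log("Failed to decode " + url);
            });
        };
        request.send();
    }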

    Playing sound streams

    Once you have a sound decompressed and ready it can be used to create a sound source. A sound source is a relatively low level object which streams its sound data at some speed to its selected target node. For the simplest system this target node is the speaker output. Even with this primitive system, you can already manipulate the playback rate of the sound, this changes its pitch and duration. There is a nice feature in the Web Audio API which allows you to adjust the behaviour of interpolating a change like this to fit your desires.
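
    Continuing the hypothetical loadSound() sketch above, a decoded buffer can be wrapped in a source node and streamed to the speakers; the playbackRate value of 1.5 is just an example.

    function playSound(context, buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.playbackRate.value = 1.5;        // higher pitch, shorter duration
        source.connect(context.destination);    // simplest target node: the speakers
        source.start(0);
        return source;
    }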

    Adding sound effects: reverb, delay, and distortion

    To add an effect to our simple system you put a processor node between the source and the speakers. We audio engineers want to split the source into a “dry” and a “wet” component at this point. The dry component is the non-processed sound and the wet one is processed. Then you send both the wet and the dry signal to the speakers and adjust the mix between them by adding a gain node on each of these paths. Gain is an engineer’s way of saying “volume”. You can keep going like this and add nodes between the source and the speakers as you please. Sometimes you’ll want effects in parallel and sometimes you’ll want them in series. When coding your system it’s probably a good idea to make it easy to change how this wiring is hooked up for any given node.
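
    A minimal sketch of that dry/wet wiring, assuming an existing AudioContext named context, an existing source node, and a convolver standing in for the effect (the gain values are illustrative):

    var dryGain = context.createGain();
    var wetGain = context.createGain();
    var effect = context.createConvolver();   // e.g. a reverb; needs an impulse response in effect.buffer

    // Split: the source feeds both the dry path and the effect (wet) path.
    source.connect(dryGain);
    source.connect(effect);
    effect.connect(wetGain);

    // Mix: both paths end at the speakers; the two gains set the balance.
    dryGain.gain.value = 0.7;
    wetGain.gain.value = 0.3;
    dryGain.connect(context.destination);
    wetGain.connect(context.destination);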

    Conclusion

    We’re quite happy with how “Songs of Diridum” turned out, and we’re impressed with the audio and 3D performance available in today’s web browsers. Hopefully the mobile devices will catch up soon performance-wise, and HTML5 will become the most versatile cross-platform game environment available.
    Now go play “Songs of Diridum”!

  5. Introducing TogetherJS

    What is TogetherJS?

    We’d like to introduce TogetherJS, a real-time collaboration tool out of Mozilla Labs.

    TogetherJS is a service you add to an existing website to add real-time collaboration features. Using the tool two or more visitors on a website or web application can see each other’s mouse/cursor position, clicks, track each other’s browsing, edit forms together, watch videos together, and chat via audio and WebRTC.

    Some of the features TogetherJS includes:

    • See the other person’s cursor and clicks
    • See scroll position
    • Watch the pages a person visits on a site
    • Text chat
    • Audio chat using WebRTC
    • Form field synchronization (text fields, checkboxes, etc)
    • Play/pause/track videos in sync
    • Continue sessions across multiple pages on a site

    How to integrate

    Many of TogetherJS’s features require no modification of your site. TogetherJS looks at the DOM and determines much of what it should do that way – it detects the form fields, detects some editors like CodeMirror and Ace, and injects its toolbar into your page.

    All that’s required to try TogetherJS out is to add this to your page:

    <script src="https://togetherjs.com/togetherjs.js"></script>

    And then create a button for your users to start TogetherJS:

    <button id="collaborate" type="button">Collaborate</button>
    <script>
    document.getElementById("collaborate")
      .addEventListener("click", TogetherJS, false);
    </script>

    If you want to see some of what TogetherJS does, jsFiddle has enabled TogetherJS: just click on its Collaboration button and it will start TogetherJS. You can also use TogetherJS in your own fiddles, as we’ll show below.

    Extending for your app

    TogetherJS can figure out some things by looking at the DOM, but it can’t synchronize your JavaScript application. For instance, if you have a list of items in your application that is updated through JavaScript, that list won’t automatically be in sync for both users. Sometimes people expect (or at least hope) that it will automatically update, but even if we did synchronize the DOM across both pages, we couldn’t synchronize your underlying JavaScript objects. Unlike products like Firebase or the Google Drive Realtime API, TogetherJS does not give you realtime persistence: your persistence and the functionality of your site are left up to you; we just synchronize sessions in the browser itself.

    So if you have a rich JavaScript application you will have to write some extra code to keep sessions in sync. We do try to make it easier, though!

    To give an example we’d like to use a simple drawing application. We’ve published the complete example as a fiddle which you can fork and play with yourself.

    A Very Small Drawing Application

    We start with a very simple drawing program. We have a simple canvas:

    <canvas id="sketch" 
            style="height: 400px; width: 400px; border: 1px solid #000">
    </canvas>

    And then some setup:

    // get the canvas element and its context
    var canvas = document.querySelector('#sketch');
    var context = canvas.getContext('2d');
     
    // brush settings
    context.lineWidth = 2;
    context.lineJoin = 'round';
    context.lineCap = 'round';
    context.strokeStyle = '#000';

    We’ll use mousedown and mouseup events on the canvas to register our move() handler for the mousemove event:

    var lastMouse = {
      x: 0,
      y: 0
    };
     
    // attach the mousedown, mousemove, mouseup event listeners.
    canvas.addEventListener('mousedown', function (e) {
        lastMouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        canvas.addEventListener('mousemove', move, false);
    }, false);
     
    canvas.addEventListener('mouseup', function () {
        canvas.removeEventListener('mousemove', move, false);
    }, false);

    And then the move() function will figure out the line that needs to be drawn:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        lastMouse = mouse;
    }

    And lastly a function to draw lines:

    function draw(start, end) {
        context.beginPath();
        context.moveTo(start.x, start.y);
        context.lineTo(end.x, end.y);
        context.closePath();
        context.stroke();
    }

    This is enough code to give us a very simple drawing application. At this point if you enable TogetherJS on this application you will see the other person move around and see their mouse cursor and clicks, but you won’t see drawing. Let’s fix that!

    Adding TogetherJS

    TogetherJS has a “hub” that echoes messages between everyone in the session. It doesn’t interpret messages, and everyone’s messages travel back and forth, including messages that come from a person that might be on another page. TogetherJS also lets the application send their own messages like:

    TogetherJS.send({
      type: "message-type", 
      ...any other attributes you want to send...
    })

    to send a message (every message must have a type), and to listen:

    TogetherJS.hub.on("message-type", function (msg) {
      if (! msg.sameUrl) {
        // Usually you'll test for this to discard messages that came
        // from a user at a different page
        return;
      }
    });

    The message types are namespaced so that your application messages won’t accidentally overlap with TogetherJS’s own messages.

    To synchronize drawing we’d want to watch for any lines being drawn and send those to the other peers:

    function move(e) {
        var mouse = {
            x: e.pageX - this.offsetLeft,
            y: e.pageY - this.offsetTop
        };
        draw(lastMouse, mouse);
        if (TogetherJS.running) {
            TogetherJS.send({type: "draw", start: lastMouse, end: mouse});
        }
        lastMouse = mouse;
    }

    Before we send we check that TogetherJS is actually running (TogetherJS.running). The message we send should be self-explanatory.

    Next we have to listen for the messages:

    TogetherJS.hub.on("draw", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        draw(msg.start, msg.end);
    });

    We don’t have to worry about whether TogetherJS is running when we register this listener, it can only be called when TogetherJS is running.

    This is enough to make our drawing live and collaborative. But there’s one thing we’re missing: if I start drawing an image, and you join me, you’ll only see the new lines I draw, you won’t see the image I’ve already drawn.

    To handle this we’ll listen for the togetherjs.hello message, which is the message each client sends when it first arrives at a new page. When we see that message we’ll send the other person an image of our canvas:

    TogetherJS.hub.on("togetherjs.hello", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = canvas.toDataURL("image/png");
        TogetherJS.send({
            type: "init",
            image: image
        });
    });

    Now we just have to listen for this new init message:

    TogetherJS.hub.on("init", function (msg) {
        if (! msg.sameUrl) {
            return;
        }
        var image = new Image();
        image.onload = function () {
            // draw the received snapshot once the data URL has been decoded
            context.drawImage(image, 0, 0);
        };
        image.src = msg.image;
    });

    With just a few lines of code TogetherJS let us make a live drawing application. Of course we had to do some of the code, but here’s some of the things TogetherJS handles for us:

    • Gives users a URL to share with another user to start the session
    • Establishes a WebSocket connection to our hub server, which echoes messages back and forth between clients
    • Lets users set their name and avatar, and see who else is in the session
    • Keeps track of who is available, who has left, and who is idle
    • Simple but necessary features like text chat are available
    • Session initialization and tracking is handled by TogetherJS

    Some of the things we didn’t do in this example:

    • We used a fixed-size canvas so that we didn’t have to deal with two clients and two different resolutions. Generally TogetherJS handles different kinds of clients by using resolution-independent positioning (and it even works with responsive design). One approach to fixing this might be to ensure a fixed aspect ratio, and then use percentages of the height/width for all the drawing positions.
    • We don’t have any fun drawing tools! Probably you wouldn’t want to synchronize the tools themselves – if I’m drawing with a red brush, there’s no reason you can’t be drawing with a green brush at the same time.
    • But something like clearing the canvas should be synchronized.
    • We don’t save or load any drawings. Once the drawing application has save and load you may have to think more about what you want to synchronize. If I have created a picture, saved it, and then return to the site to join your session, will your image overwrite mine? Putting each image at a unique URL will make it clearer whose image everyone is intending to edit.

    Want To Look At More?

    • Curious about the architecture of TogetherJS? Read the technology overview.
    • Try TogetherJS out on jsFiddle
    • Find us via the button in the documentation: “Get Live Help” which will ask to start a TogetherJS session with one of us.
    • Find us on IRC in #togetherjs on irc.mozilla.org.
    • Find the code on GitHub, and please open an issue if you see a bug or have a feature request. Don’t be shy, we are interested in lots of kinds of feedback via issues: ideas, potential use cases (and challenges coming from those use cases), questions that don’t seem to be answered via our documentation (each of which also implies a bug in our documentation), telling us about potentially synergistic applications.
    • Follow us on Twitter: @togetherjs.

    What kind of sites would you like to see TogetherJS on? We’d love to hear in the comments.

  6. Introducing the Firefox OS App Manager

    The Firefox OS App Manager is a new developer tool available in Firefox 26 that greatly improves the process of building and debugging Firefox OS apps, either in the Simulator or on a connected device. Based on the Firefox OS Simulator add-on, it bridges the gap between existing Firefox Developer Tools and the Firefox OS Simulator, allowing developers to fully debug and deploy their web apps to Firefox OS with ease.

    Additional features made available in Firefox 26 are discussed in this post.

    App Manager In Action

    The App Manager replaces the current Simulator Dashboard and provides an integrated debug and deployment environment for your Firefox OS apps, by leveraging the existing Firefox Developer Tools. You can install hosted or packaged apps and debug them in the Simulator or with a connected device. The App Manager also provides additional information to the developer including the current Firefox OS version of a connected device, the ability to take screenshots, a list of all currently installed apps and a list of all the APIs and what privilege level is required to use each. Here is a screencast of the App Manager showing off some of the features available for Firefox OS Development.

    In addition to debugging your own apps, the App Manager also provides the ability to update, start, stop and debug system-level apps. Debugging an app using the Developer Tools is similar to debugging any other Web app, and changes made in the Tools are automatically reflected in real time in the app, either in the Simulator or on the connected device. You can use the Console to see warnings and errors within the app, the Inspector to view and modify the currently loaded HTML and CSS, or debug your JavaScript using the Debugger.

    For more information about using the Developer Tools be sure to check out the Developer Tools series on this blog and for the most up to date information take a look at the Developer Tools section of MDN.

    Getting Started with the App Manager

    To get started using the App Manager take a look at the MDN Documentation on Using the App Manager. Do keep in mind that to see what is shown above you need:

    • Firefox 26 or later
    • Firefox OS 1.2 or later
    • at least the 1.2 version of the Firefox OS Simulator
    • either the ADB SDK or the ADB Helper Add-on

    Details for these are described in the above MDN link.

    Mozilla is very interested in your feedback as that is the best way to make useful tools, so please be sure to reach out to us through Bugzilla or in the comments and let us know what you think about the new App Manager.

  7. Firefox Developer Tools and Firebug

    If you haven’t tried the Firefox Developer Tools in the last 6 months, you owe it to yourself to take another look. Grab the latest Aurora browser, and start up the tools from the Web Developer menu (a submenu of Tools on some platforms).

    The tools have improved a lot lately: black-boxing lets you treat sources as system libraries that won’t distract your debugging flow. Source maps let you debug source generated by transpilers or minimizers. The inspector has paint flashing, a new font panel, and a greatly improved style inspector with tab completion and pseudo-elements. The network monitor helps you debug your network activity. The list goes on, and you can read more about recent developments in our series of Aurora Posts.

    After getting to know the tools, start the App Manager. Install the Firefox OS Simulator to see how your app will behave on a device. If you have a Firefox OS device running the latest 1.2 nightlies, you can connect the tools directly to the phone.

    Why The Built-in Tools?

    The Web owes a lot to Firebug. For a long time, Firebug was the best game in town. It introduced the world to visual highlighting of the DOM, in-place editing of styles, and the console API.

    Before the release of Firefox 4 we decided that Firefox needed a set of high-quality built-in tools. Baking them into the browser let us take advantage of the existing Mozilla community and infrastructure, and building in mozilla-central makes a huge difference when working with Gecko and SpiderMonkey developers. We had ambitious platform changes planned: the JSD API that Firebug uses to debug JavaScript had aged badly, and we wanted to co-evolve the tools alongside a new SpiderMonkey Debugger API.

    We thought long and hard about including Firebug wholesale and considered several approaches to integrating it. An early prototype of the Inspector even included a significant portion of Firebug. Ultimately, integration proved to be too challenging and would have required rewrites that would have been equivalent to starting over.

    How is Firebug Doing?

    Firebug isn’t standing still. The Firebug Working Group continues to improve it, as you can see in their latest 1.12 release. Firebug is working hard to move from JSD to the new Debugger API, to reap the performance and stability benefits we added for the Firefox Developer Tools.

    After that? Jan Odvarko, the Firebug project leader, had this to say:

    Firebug has always been maintained rather as an independent project outside of existing processes and Firefox environment while DevTools is a Mozilla in-house project using standard procedures. Note that the Firebug goal has historically been to complement Firefox features and not compete with them (Firebug is an extension after all) and we want to keep this direction by making Firebug a unique tool.

    Everyone wants to figure out the best way for Firebug’s community of users, developers, and extension authors to shape and complement the Firefox Developer Tools. The Firebug team is actively discussing their strategy here, but hasn’t decided how they want to accomplish that.

    Follow the Firebug blog and @firebugnews account to get involved.

    What’s Next for Firefox Developer Tools?

    We have more exciting stuff coming down the pipe. Some of this will be new feature work, including great performance analysis and WebGL tools. Much of it will be responding to feedback, especially from developers giving the tools a first try.

    We also want to find out what you can add to the tools. Recently the Ember.js Chrome Extension was ported to Firefox Developer Tools as a Sunday hack, but we know there are more ideas out there. Like most Firefox code you can usually find a way to do what you want, but we’re also working to define and publish a Developer Tools API. We want to help developers build high quality, performant developer tools extensions. We’d love to hear from developers writing extensions for Firebug or Chrome Devtools about how to best go about that.

    Otherwise, keep following the Hacks blog to learn more about how the Firefox Developer Tools are evolving. Join in at the dev-developer-tools mailing list, the @FirefoxDevTools Twitter account, or #devtools on irc.mozilla.org.

  8. Reintroducing the Firefox Developer Tools, part 1: the Web Console and the JavaScript Debugger

    This is part one of five, focusing on the built-in Developer Tools in Firefox: their features and where we are now with them. The intention is to show you all the possibilities available, the progress made, and what we are aiming for.

    Firefox 4 saw the launch of the Web Console, the first of the new developer tools built into Firefox. Since then we’ve been adding more capabilities to the developer tools, which now perform a wide range of functions and can be used to debug and analyze web applications on desktop Firefox, Firefox OS, and Firefox for Android.

    This is the first in a series of posts in which we’ll look at where the developer tools have got to since Firefox 4. We’ll present each tool in a short screencast, then wrap things up with a couple of screencasts illustrating specific workflow patterns that should help you get the most out of the developer tools. These will include scenarios such as mobile development and altering and debugging CSS-based applications.

    In this first post we present the latest Web Console and JavaScript Debugger.

    Web Console

    The Web Console is primarily used to display information associated with the currently loaded web page. This includes HTML, CSS, JavaScript, and Security warnings and errors. In addition network requests are displayed and the console indicates whether they succeeded or failed. When warnings and errors are detected the Web Console also offers a link to the line of code which caused the problem. Often the Web Console is the first stop in debugging a Web Application that is not performing correctly.

    The Web Console also lets you execute JavaScript within the context of the page. This means you can inspect objects defined by the page, execute functions within the scope of the page, and access specific elements using CSS selectors. The following screencast presents an overview of the Web Console’s features.
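
    For a quick flavour of that, here are the kinds of expressions you might evaluate in the console ($$() is a console helper for document.querySelectorAll(); myApp is a hypothetical object defined by the page):

    // Inspect an object defined by the page
    myApp.state

    // Call a function within the scope of the page
    myApp.refresh()

    // Access specific elements using a CSS selector
    $$("nav a.active")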

    Take a look at the MDN Web Console documentation for more information.

    JavaScript Debugger

    The JavaScript Debugger is used for debugging and refining JavaScript that your Web Application is currently using. The Debugger can be used to debug code running in Firefox OS and Firefox for Android, as well as Firefox Desktop. It’s a full-featured debugger providing watch expressions, scoped variables, breakpoints, conditional expressions, step-through, step-over and run to end functionality. In addition you can change the values of variables at run time while the debugger has paused your application.
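
    One small, general-purpose trick worth knowing: a debugger; statement in your code pauses execution in the JavaScript Debugger whenever the tools are open, which is handy while refining code. The surrounding function and helpers below are hypothetical.

    // Hypothetical application code
    function handleSubmit(form) {
        var data = collectFields(form);   // hypothetical helper
        debugger;                         // execution pauses here when the Debugger is open
        return sendToServer(data);        // hypothetical helper
    }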

    The following screencast illustrates some of the features of the JavaScript Debugger.

    For more information on the JavaScript Debugger, see the MDN Debugger documentation.

    Learn More

    These screencasts give a quick introduction to the main features of these tools. For the full details on all of the developer tools, check out the full MDN Tools documentation.

    Coming Up

    In the next post in this series we will be delving into the Style Editor and the Scratchpad. Please give us your feedback on what features you would like to see explained in more detail within the comments.

  9. Firefox 24 for Android gets WebRTC support by default

    WebRTC is now on Firefox for Android as well as Firefox Desktop! Firefox 24 for Android now supports mozGetUserMedia, mozRTCPeerConnection, and DataChannels by default. mozGetUserMedia has been in desktop releases since Firefox 20, and mozRTCPeerConnection and DataChannels since Firefox 22, and we’re excited that Android is now joining Desktop releases in supporting these cool new features!

    What you can do

    With WebRTC enabled, developers can:

    • Capture camera or microphone streams directly from Firefox Android using only JavaScript (a feature we know developers have been wanting for a while!),
    • Make browser to browser calls (audio and/or video) which you can test with sites like appspot.apprtc.com, and
    • Share data (no server in the middle) to enable peer-to-peer apps (e.g. text chat, gaming, image sharing especially during calls); see the DataChannel sketch below this list
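
    As a rough sketch of the data-sharing piece, using the prefixed API of this era (signalling between the peers is omitted, and the "chat" label is arbitrary):

    var pc = new mozRTCPeerConnection();
    var channel = pc.createDataChannel("chat");   // carried peer-to-peer over the connection

    channel.onopen = function() {
        channel.send("hello, peer!");             // no server in the middle
    };
    channel.onmessage = function(event) {
        console.log("peer says: " + event.data);
    };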

    We’re eager to see the ideas developers come up with!

    For early adopters and feedback

    Our support is still largely intended for developers and for early adopters at this stage to give us feedback. The working group specs are not complete, and we still have more features to implement and quality improvements to make. We are also primarily focused now on making 1:1 (person-to-person) calling solid — in contrast to multi-person calling, which we’ll focus on later. We welcome your testing and experimentation. Please give us feedback, file bug reports and start building new applications based on these new abilities.

    If you’re not sure where to start, please start by reading some of the WebRTC articles on Hacks that have already been published. In particular, please check out WebRTC and the Early API, The Making of Face to GIF, and PeerSquared as well as An AR Game (which won our getUserMedia Dev Derby) and WebRTC Experiments & Demos.

    An example of simple video frame capture (which will capture new images at approximately 15fps):

    navigator.getUserMedia({video: true, audio: false},
      function (stream) {
        // success callback: attach the camera stream to the <video> element
        video.src = URL.createObjectURL(stream);
        video.play();
      },
      function (err) {
        console.log("getUserMedia failed: " + err);
      });
     
    setInterval(function () {
      context.drawImage(video, 0,0, width,height);
      frames.push(context.getImageData(0,0, width,height));
    }, 67);

    Snippet of code taken from “Multi-person video chat” on nightly-gupshup (you can try it in the WebRTC Test Landing Page — full code is on GitHub)

    function acceptCall(offer) {
        log("Incoming call with offer " + offer);
     
        navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
        document.getElementById("localvideo").mozSrcObject = stream;
        document.getElementById("localvideo").play();
        document.getElementById("localvideo").muted = true;
     
        var pc = new mozRTCPeerConnection();
        pc.addStream(stream);
     
        pc.onaddstream = function(obj) {
          document.getElementById("remotevideo").mozSrcObject = obj.stream;
          document.getElementById("remotevideo").play();
        };
     
        pc.setRemoteDescription(new mozRTCSessionDescription(JSON.parse(offer.offer)), function() {
          log("setRemoteDescription, creating answer");
          pc.createAnswer(function(answer) {
            pc.setLocalDescription(answer, function() {
              // Send answer to remote end.
              log("created Answer and setLocalDescription " + JSON.stringify(answer));
              peerc = pc;
              jQuery.post(
                "answer", {
                  to: offer.from,
                  from: offer.to,
                  answer: JSON.stringify(answer)
                },
                function() { console.log("Answer sent!"); }
              ).error(error);
            }, error);
          }, error);
        }, error);
      }, error);
    }
     
    function initiateCall(user) {
        navigator.mozGetUserMedia({video:true, audio:true}, function(stream) {
        document.getElementById("localvideo").mozSrcObject = stream;
        document.getElementById("localvideo").play();
        document.getElementById("localvideo").muted = true;
     
        var pc = new mozRTCPeerConnection();
        pc.addStream(stream);
     
        pc.onaddstream = function(obj) {
          log("Got onaddstream of type " + obj.type);
          document.getElementById("remotevideo").mozSrcObject = obj.stream;
          document.getElementById("remotevideo").play();
        };
     
        pc.createOffer(function(offer) {
          log("Created offer" + JSON.stringify(offer));
          pc.setLocalDescription(offer, function() {
            // Send offer to remote end.
            log("setLocalDescription, sending to remote");
            peerc = pc;
            jQuery.post(
              "offer", {
                to: user,
                from: document.getElementById("user").innerHTML,
                offer: JSON.stringify(offer)
              },
              function() { console.log("Offer sent!"); }
            ).error(error);
          }, error);
        }, error);
      }, error);
    }

    Any code that runs on Desktop should run on Android. (Ah, the beauty of HTML5!) However, you may want to optimize for Android knowing that it could now be used on a smaller screen device and even rotated.

    This is still a hard-hat area, especially for mobile. We’ve tested our Android support of 1:1 calling with a number of major WebRTC sites, including talky.io, apprtc.appspot.com, and codeshare.io.

    Known issues

    • Echo cancellation needs improvement; for calls we suggest a headset (Bug 916331)
    • Occasionally there are audio/video sync issues or excessive audio delay. We already have a fix in Firefox 25 that will improve delay (Bug 884365).
    • On some devices there are intermittent video-capture crashes; we’re actively investigating (Bug 902431).
    • Lower-end devices or devices with poor connectivity may have problems decoding or sending higher-resolution video at good frame rates.

    Please help us bring real-time communications to the web: build your apps, give us your feedback, report bugs, and help us test and develop. With your help, your ideas, and your enthusiasm, we will rock the web to a whole new level.

  10. CSS Length Explained

    When styling a web site with CSS you might have realised that an inch on a screen is not an actual inch, and a pixel is not necessarily an actual pixel. Have you ever figured out how to represent the speed of light in CSS pixels? In this post, we will explore the definition of CSS length units starting by understanding some of the physical units with the same name, in the style of C.G.P. Grey[1].

    The industrial inch (in)

    People who live in places where the inch is a common measure are already familiar with the physical unit. For the rest of us living in places using the metric system: since 1933, the “industrial inch” has been defined as the mathematical equivalent of 2.54 centimeters, or 0.0254 metres.

    The device pixel

    Computer screens display things in pixels. The single physical “light blob” on the display, capable of displaying the full color independently of its neighbour, is called a pixel (picture element). In this post, we refer to the physical pixel on the screen as a “device pixel” (not to be confused with the CSS pixel, which will be explained later on).

    Display pixel density, dots per inch (DPI), or pixels per inch (ppi)

    The physical dimension of a device pixel on a specific device can be derived from the display pixel density given by the device manufacturer, usually in dots per inch (DPI) or pixels per inch (PPI). Both units are essentially talking about the same thing when they refer to a screen display, where DPI is the commonly-used-but-incorrect unit and PPI is the more-accurate-but-no-one-cares one. The physical dimension of a device pixel is simply the inverse of its DPI.

    The MacBook Air (2011) I am currently using comes with a 125 DPI display, so

    (width or height of one device pixel) = 1/125 inch = 0.008 inch = 0.02032 cm

    Obviously this number is too small to be useful on a specification sheet, so the DPI figure is what gets printed.
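
    The same arithmetic as a throwaway console snippet (the 125 DPI value is simply the laptop from the example above):

      // Physical size of one device pixel, derived from the display's DPI.
      var dpi = 125;                    // device pixels per inch
      var inchPerPixel = 1 / dpi;       // 0.008 inch
      var cmPerPixel = inchPerPixel * 2.54;

      // Prints roughly: 0.008 inch = 0.02032 cm per device pixel
      console.log(inchPerPixel + " inch = " + cmPerPixel + " cm per device pixel");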

    The CSS pixel (px)

    The dimension of a CSS pixel can be roughly regarded as a size that can be seen comfortably by the naked human eye: not so small that you have to squint, and not so big that you can see pixelation. Instead of consulting your ophthalmologist on the definition of “seen comfortably”, the W3C CSS specification gives us a recommended reference:

    The reference pixel is the visual angle of one pixel on a device with a pixel density of 96 DPI and a distance from the reader of an arm’s length.

    The proper dimension of a CSS pixel actually depends on the distance between you and the display. With the exception of Google Glass (which mounts on your head), people usually find their own comfortable distance for a particular device, depending on their eyesight. Given that we have no way to tell whether a user is visually impaired, what concerns us is simply the typical viewing distance for a given device form factor: a mobile phone is usually held closer, while a laptop is usually used on a desk or a lap. Thus the “viewing distance” of mobile phones is shorter than that of laptops or desktop computers.

    The viewing distance

    As previously mentioned, the viewing distance varies from person to person and from device to device, which is why we must categorise devices into form factors. The recommended reference viewing distance (“an arm’s length”) and the reference pixel density (96 DPI) are actually historical; they are a testimony to the way people usually accessed the web in the late 20th century:

    The first computer that ran the web. (Image from Wikipedia: WWW.)

    For day-to-day devices of the 21st century, we have different reference recommendations:

    Device                              | Baseline pixel density | Width/height of one CSS pixel | Viewing distance
    ------------------------------------|------------------------|-------------------------------|------------------
    A 20th-century PC with CRT display  | 96 DPI                 | ~0.2646 mm (1/96 in)          | 28 in (71.12 cm)
    Modern laptop with LCD[2]           | 125 DPI                | 0.2032 mm (1/125 in)          | 21.5 in (54.61 cm)
    Smartphones/tablets[3]              | 160 DPI                | ~0.159 mm (1/160 in)          | 16.8 in (42.672 cm)

    From this table, it’s pretty easy to see that as pixel density increases and the size of a CSS pixel gets smaller, muggles like you and me usually need to hold the device closer to comfortably see what’s on the screen.

    With that, we have established a basic fact in the world of CSS: a CSS pixel is displayed at different physical dimensions on different devices, but always at a size the viewer finds comfortable. Leveraging this principle, we can safely set basic dimensions (e.g. the base font size) to a fixed pixel value, independent of device form factor.
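
    As a quick check of that claim, this standalone snippet (the numbers are simply the rows of the table above) computes the visual angle subtended by one CSS pixel for each form factor; the results come out nearly identical:

      // Visual angle (in degrees) of one CSS pixel:
      // angle = 2 * atan(pixelSize / (2 * viewingDistance)), both in inches.
      function cssPixelAngle(dpi, viewingDistanceInches) {
        var pixelInches = 1 / dpi;
        return 2 * Math.atan(pixelInches / (2 * viewingDistanceInches)) * 180 / Math.PI;
      }

      console.log(cssPixelAngle(96, 28));    // 20th-century PC with CRT: ~0.0213 degrees
      console.log(cssPixelAngle(125, 21.5)); // modern laptop:            ~0.0213 degrees
      console.log(cssPixelAngle(160, 16.8)); // smartphone/tablet:        ~0.0213 degrees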

    CSS inch (in)

    On a computer screen, a CSS inch has nothing to do with the physical inch. Instead, it has been redefined to be exactly 96 CSS pixels. This results in an awkward situation: you can never reliably draw an accurate ruler on screen with basic CSS units[4]. Yet it gives us what is intended: elements sized in CSS units are displayed across devices at a size the user finds comfortable.

    As a side note, if the user prints out the page on a piece of paper, browsers will map the CSS inch to the physical inch, so you can reliably draw an accurate ruler with CSS and print it out. (Just make sure you turn off “scale to fit” in the print settings!)
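
    If you want to see the on-screen mapping for yourself, here is a minimal sketch for the browser console (the probe element is created just for this measurement):

      // Confirm that 1 CSS inch maps to exactly 96 CSS pixels on screen.
      var probe = document.createElement("div");
      probe.style.width = "1in";          // one CSS inch
      probe.style.position = "absolute";  // keep it out of the page flow
      document.body.appendChild(probe);

      // offsetWidth is reported in CSS pixels, so this logs 96
      // regardless of the screen's physical DPI.
      console.log(probe.offsetWidth + " CSS pixels per CSS inch");

      document.body.removeChild(probe);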

    Device pixel ratio (DPPX)

    As we step into the future (where is my flying car?), many smartphones nowadays ship with high-density displays. In order to keep CSS pixels sized consistently across every device that accesses the web (i.e. everything with a screen and a network connection), device manufacturers have to map multiple device pixels to one CSS pixel, making up for the relatively small physical size of each device pixel. The ratio of the size of a CSS pixel to the size of a device pixel is the device pixel ratio (DPPX).

    Let’s take the iPhone 4 as the most famous example. It comes with a 326 DPI display. According to our table above, as a smartphone, its typical viewing distance is 16.8 inches and its baseline pixel density is 160 DPI. To create one CSS pixel, Apple chose to set the device pixel ratio to 2, which effectively makes iOS Safari display web pages the same way it would on a 163 DPI phone.

    Before we move on, take a look back at the numbers above. We could actually do better by setting the device pixel ratio not to 2 but to 326/160 = 2.0375, making a CSS pixel match the reference dimensions exactly. Unfortunately, such a ratio has an unintended consequence: since each CSS pixel would no longer be drawn with whole device pixels, the browser would have to anti-alias bitmap images, borders, and so on, as they are almost always defined in whole CSS pixels. It’s hard for a browser to use 2.0375 device pixels to draw your 1-CSS-pixel-wide border; it’s much easier if the ratio is simply 2.

    Incidentally, 163 DPI happens to be the pixel density of the previous generation of the iPhone, so the web works the same way without developers having to make any special “upgrades” to their websites.

    Device manufacturers usually choose 1.5, or 2, or other whole numbers as the DPPX value. Occasionally, some devices decided not to play nice and shipped with something like 1.325 DPPX; as web developers, we should probably ignore those devices.

    Firefox OS, initially a mobile phone operating system, implements the DPPX calculation in the same way; the actual DPPX value is determined by the manufacturer of each shipping device.
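
    To see the DPPX your browser is applying right now, a small console sketch can read window.devicePixelRatio and the matching resolution media query. The 2dppx threshold below is just an example; older WebKit-based browsers expose the same idea through the prefixed -webkit-min-device-pixel-ratio media feature:

      // Inspect the current device pixel ratio (DPPX).
      console.log("DPPX: " + window.devicePixelRatio); // e.g. 2 on an iPhone 4

      // The same information exposed through a CSS media query.
      if (window.matchMedia("(min-resolution: 2dppx)").matches) {
        console.log("High-density display: consider serving 2x bitmap images.");
      } else {
        console.log("Standard-density display: 1x images are fine.");
      }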

    CSS point (pt)

    The point is a commonly used unit that came from the typography industry, where it was the unit for metal typesetting. As the world gradually moved from letterpress printing to desktop publishing, the “PostScript point” was redefined as 1/72 of an inch. CSS follows the same convention and maps 1 CSS point to 1/72 of a CSS inch, i.e. 96/72 CSS pixels.

    You can easily see that, just like the CSS inch, the CSS point on a device display has little to do with the traditional unit. Its size only matches its desktop-publishing counterpart when we actually print out the web page.
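
    A back-of-the-envelope check of that mapping (nothing page-specific here): the common 12pt body size works out to the familiar browser default of 16 CSS pixels.

      // CSS points to CSS pixels: 1pt = 96/72 px.
      var pt = 12;
      var px = pt * 96 / 72;
      console.log(pt + "pt = " + px + "px"); // 12pt = 16px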

    CSS pica (pc), CSS centimeter (cm), CSS millimeter (mm)

    Just like the CSS inch, while their relative relationships are kept, their base size on screen has been redefined in terms of the CSS pixel rather than the standard SI unit (the metre), which is itself defined by the speed of light, a universal constant.

    We could literally redefine the speed of light in CSS: it is 1,133,073,857,007.87 CSS pixels per second[5]. Relativity in CSS means that, seen from the real world looking into the screen, light travels a bit more slowly on devices with smaller form factors than on traditional PCs.
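
    The arithmetic from footnote [5], reproduced as a quick console check:

      // Speed of light expressed in CSS pixels per second:
      // metres/s ÷ metres per inch × CSS pixels per CSS inch.
      var c = 299792458;          // metres per second
      var metresPerInch = 0.0254;
      var pxPerInch = 96;         // CSS pixels per CSS inch

      console.log(c / metresPerInch * pxPerInch); // ~1133073857007.87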

    The viewport meta tag

    Though a smartphone fits handily in the palm of your hand and it is guaranteed that CSS pixels will be displayed at a size comfortable to the user, a device capable of showing only part of a fixed-width desktop website is not going to be very useful. It would be equally unhelpful if the phone violated the CSS unit rules and pretended to be something it is not.

    The introduction of the viewport meta tag brought the best of both worlds to mobile devices by giving control of page scaling to both users and web developers. You can design a mobile layout and opt out of viewport scaling, or you can leave your desktop-oriented website as-is and have the mobile browser scale down the page for you and the user. As always, detailed descriptions and usage can be found on the Mozilla Developer Network.
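
    One rough way to observe the effect from script, assuming the page declares a typical width=device-width, initial-scale=1 viewport meta tag (the comparison is illustrative rather than a robust feature test):

      // Compare the layout viewport (what CSS lays out against) with the
      // screen width, both in CSS pixels. With a width=device-width viewport
      // meta tag the two are roughly equal on a mobile browser; without it,
      // the layout viewport usually defaults to a desktop-like width (~980px).
      console.log("Layout viewport: " + document.documentElement.clientWidth + "px");
      console.log("Screen width:    " + screen.width + "px");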

    Conclusion

    Browser vendors, while competing with each other, recognize the value of keeping the Web platform stable and coordinate their feature sets through standards organizations. Exposed features and APIs are carefully tested for their usefulness across scenarios before being declared suitable as a standard; the definition of the CSS pixel has been one of them since the beginning. New features must maintain backward compatibility instead of changing old behavior[6], so many of them (device pixels, the viewport meta tag, etc.) are introduced as extra layers of complexity. Old web pages that use standardized features thus have a built-in “forward compatibility”.

    With that in mind, Mozilla, together with our partners, nurtures and defends the Open Web, the unique platform that we all cherish.


    [1] Well, not actually, because we are not going to make a video about it. I wouldn’t mind if C.G.P. Grey actually made a video about this, though!

    [2] These appear to be common values for laptops[citation needed].

    [3] The typical value is documented here as an “mdpi” Android device.

    [4] With one exception: the non-standard CSS unit mozmm gives you the ability to do so, provided that Firefox knows the actual pixel density of the display it runs on. That is out of scope here.

    [5] 299,792,458 metres per second ÷ 0.0254 metres per inch × 96 pixels per inch

    [6] There was a brief period when people tried to come up with an entirely new standard that broke backward compatibility (*cough* XHTML *cough*), but that’s a story for another time.