Mozilla

Articles

  1. What do you want from your DevTools?

    Every tool starts with an idea: “if I had this, then I could do that faster, better, or more easily.” The Firefox Developer Tools are built atop hundreds of those ideas, and many of them come from people in the web development community just like you.

    We collect those ideas in our UserVoice forum, and use your votes to help prioritize our work.

    Last April, @hopefulcyborg requested the ability to “Connect Firefox Developer Tools to [Insert Browser Here].” The following November, we launched Firefox Developer Edition which includes a preliminary version of just that: the ability to debug Chrome and Safari, even on mobile, from Firefox on your desktop. We called it Valence.

    Good ideas don’t have to be big or even particularly innovative as long as they help you accomplish your goals more effectively. For instance, all of the following features also came directly from feedback on our UserVoice forum:

    The color picker introduced in Firefox 31

    Though somewhat small in scope, each of those ideas made the DevTools more efficient, more helpful, and more usable. And that’s our mission: we make Firefox for you, and we want you to know that we’re listening. If you have ideas for the Firefox Developer Tools, submit them on UserVoice. If you don’t have ideas, vote on the ones that are already there. The right people will see it, and it will help us build the best tools for you.

    If you want to go a step further and contribute to the tools yourself, we’re always happy to help people get involved. After all, Mozilla was founded on the open source ideal of empowering communities to collaborate and improve the Internet for all of us.

    These are your tools. How could they be better?

  2. WebRTC in Firefox 38: Multistream and renegotiation

    Building on the JSEP (JavaScript Session Establishment Protocol) engine rewrite introduced in Firefox 37, Firefox 38 now supports multistream (multiple tracks of the same type in a single PeerConnection) and renegotiation (multiple offer/answer exchanges in a single PeerConnection). As usual with such things, there are caveats and limitations, but the functionality seems to be pretty solid.

    Multistream and renegotiation features

    Why are these things useful, you ask? For example, now you can handle a group video call with a single PeerConnection (multistream), and do things like add/remove these streams on the fly (renegotiation). You can also add screensharing to an existing video call without needing a separate PeerConnection. Here are some advantages of this new functionality:

    • Simplifies your job as an app-writer
    • Requires fewer rounds of ICE (Interactive Connectivity Establishment – the protocol for establishing connection between the browsers), and reduces call establishment time
    • Requires fewer ports, both on the browser and on TURN relays (if using bundle, which is enabled by default)

    Now, there are very few WebRTC services that use multistream (the way it is currently specified, see below) or renegotiation. This means that real-world testing of these features is extremely limited, and there will probably be bugs. If you are working with these features, and are having difficulty, do not hesitate to ask questions in IRC at irc.mozilla.org on #media, since this helps us find these bugs.

    Also, it is important to note that Google Chrome’s current implementation of multistream is not going to be interoperable; this is because Chrome has not yet implemented the specification for multistream (called “unified plan” – check on their progress in the Google Chromium Bug tracker). Instead they are still using an older Google proposal (called “plan B”). These two approaches are mutually incompatible.

    On a related note, if you maintain or use a WebRTC gateway that supports multistream, odds are good that it uses “plan B” as well, and will need to be updated. This is a good time to start implementing unified plan support. (Check the Appendix below for examples.)

    Building a simple WebRTC video call page

    So let’s start with a concrete example. We are going to build a simple WebRTC video call page that allows the user to add screen sharing during the call. As we are going to dive deep quickly you might want to check out our earlier Hacks article, WebRTC and the Early API, to learn the basics.

    First we need two PeerConnections:

    pc1 = new mozRTCPeerConnection();
    pc2 = new mozRTCPeerConnection();

    Then we request access to camera and microphone and attach the resulting stream to the first PeerConnection:

    let videoConstraints = {audio: true, video: true};
    navigator.mediaDevices.getUserMedia(videoConstraints)
      .then(stream1 => {
        pc1.addStream(stream1);
      });

    To keep things simple we want to be able to run the call just on one machine. But most computers today don’t have two cameras and/or microphones available. And just having a one-way call is not very exciting. So let’s use a built-in testing feature of Firefox for the other direction:

    let fakeVideoConstraints = {video: true, fake: true};
    navigator.mediaDevices.getUserMedia(fakeVideoConstraints)
      .then(stream2 => {
        pc2.addStream(stream2);
      });

    Note: You’ll want to issue this second getUserMedia() call from within the success callback of the first one, so that you don’t have to track with boolean flags whether both calls have succeeded before proceeding to the next step.
    Firefox also has a built-in fake audio source (which you can turn on like this {audio: true, fake: true}). But listening to an 8kHz tone is not as pleasant as looking at the changing color of the fake video source.
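    The chaining pattern described in the note above can be sketched without a browser by injecting stand-in functions. Here, getMedia, pcA, and pcB are hypothetical stand-ins for navigator.mediaDevices.getUserMedia and the two PeerConnections; only the control flow is the point:

```javascript
// Sketch of the "chain the second getUserMedia off the first" pattern.
// getMedia, pcA, and pcB are injected stand-ins, not real WebRTC objects.
function setupStreams(getMedia, pcA, pcB, realConstraints, fakeConstraints) {
  return getMedia(realConstraints)
    .then(stream1 => {
      pcA.addStream(stream1);
      // Only request the second stream once the first has succeeded,
      // so no boolean bookkeeping is needed.
      return getMedia(fakeConstraints);
    })
    .then(stream2 => {
      pcB.addStream(stream2);
    });
}
```

With the real APIs, getMedia would be navigator.mediaDevices.getUserMedia bound to the navigator, and pcA/pcB the two PeerConnections from the example.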

    Now we have all the pieces ready to create the initial offer:

    pc1.createOffer().then(step1, failed);

    Now the typical WebRTC offer/answer flow follows:

    function step1(offer) {
      pc1_offer = offer;
      pc1.setLocalDescription(offer).then(step2, failed);
    }
     
    function step2() {
      pc2.setRemoteDescription(pc1_offer).then(step3, failed);
    }

    For this example we take a shortcut: instead of passing the signaling messages through an actual signaling relay, we simply hand the information to both PeerConnections, since both are locally available on the same page. Refer to our previous Hacks article, WebRTC and the Early API, for a solution that uses Firebase as a relay to connect two browsers.

    function step3() {
      pc2.createAnswer().then(step4, failed);
    }
     
    function step4(answer) {
      pc2_answer = answer;
      pc2.setLocalDescription(answer).then(step5, failed);
    }
     
    function step5() {
      pc1.setRemoteDescription(pc2_answer).then(step6, failed);
    }
     
    function step6() {
      log("Signaling is done");
    }

    The one remaining piece is to connect the remote videos once we receive them.

    pc1.onaddstream = function(obj) {
      pc1video.mozSrcObject = obj.stream;
    }

    Add a similar clone of this for our PeerConnection 2. Keep in mind that these callback functions are super trivial — they assume we only ever receive a single stream and only have a single video player to connect it. The example will get a little more complicated once we add the screen sharing.

    With this we should be able to establish a simple call with audio and video from the real devices getting sent from PeerConnection 1 to PeerConnection 2 and in the opposite direction a fake video stream that shows slowly changing colors.

    Implementing screen sharing

    Now let’s get to the real meat and add screen sharing to the already established call.

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};
     
      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Two things are required to get screen sharing working:

    1. Only pages loaded over HTTPS are allowed to request screen sharing.
    2. You need to append your domain to the user preference  media.getusermedia.screensharing.allowed_domains in about:config to whitelist it for screen sharing.

    For the screenConstraints you can also use "window" or "application" instead of "screen" if you want to share less than the whole screen.
    We are using getTracks() here to fetch and store the video track out of the stream we get from the getUserMedia call, because we need to remember the track later when we want to be able to remove screen sharing from the call. Alternatively, in this case you could use the addStream() function used before to add new streams to a PeerConnection. But the addTrack() function gives you more flexibility if you want to handle video and audio tracks differently, for instance. In that case, you can fetch these tracks separately via the getAudioTracks() and getVideoTracks() functions instead of using the getTracks() function.
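    The relationship between these helpers can be sketched with plain objects. The makeFakeStream function below is purely illustrative; real MediaStream instances provide these methods natively:

```javascript
// Illustrative only: a plain object mimicking how a MediaStream's
// getAudioTracks()/getVideoTracks() relate to getTracks().
function makeFakeStream(tracks) {
  return {
    getTracks: () => tracks.slice(),
    getAudioTracks: () => tracks.filter(t => t.kind === 'audio'),
    getVideoTracks: () => tracks.filter(t => t.kind === 'video'),
  };
}
```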

    Once you add a stream or track to an established PeerConnection this needs to be signaled to the other side of the connection. To kick that off, the onnegotiationneeded callback will be invoked. So your callback should be setup before adding a track or stream. The beauty here — from this point on we can simply re-use our signaling call chain. So the resulting screen share function looks like this:

    function screenShare() {
      let screenConstraints = {video: {mediaSource: "screen"}};
     
      pc1.onnegotiationneeded = function (event) {
        pc1.createOffer().then(step1, failed);
      };
     
      navigator.mediaDevices.getUserMedia(screenConstraints)
        .then(stream => {
          stream.getTracks().forEach(track => {
            screenStream = stream;
            screenSenders.push(pc1.addTrack(track, stream));
          });
        });
    }

    Now the receiving side also needs to learn that the stream from the screen sharing was successfully established. We need to slightly modify our initial onaddstream function for that:

    pc2.onaddstream = function(obj) {
      var stream = obj.stream;
      if (stream.getAudioTracks().length == 0) {
        pc3video.mozSrcObject = obj.stream;
      } else {
        pc2video.mozSrcObject = obj.stream;
      }
    }

    The important thing to note here: With multistream and renegotiation onaddstream can and will be called multiple times. In our little example onaddstream is called the first time we establish the connection and PeerConnection 2 starts receiving the audio and video from the real devices. And then it is called a second time when the video stream from the screen sharing is added.
    We are simply assuming here that the screen share will have no audio track in it to distinguish the two cases. There are probably cleaner ways to do this.
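    One hypothetical cleaner alternative is to announce the screen-share stream's id over the signaling channel before renegotiating, and dispatch on the id when the stream arrives. The names below are illustrative, not part of the demo:

```javascript
// screenStreamIds would be populated from a signaling message sent
// before the renegotiated stream arrives on the receiving side.
const screenStreamIds = new Set();

// Classify an incoming stream by its announced id rather than by
// the presence or absence of an audio track.
function classifyStream(stream) {
  return screenStreamIds.has(stream.id) ? 'screen' : 'camera';
}
```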

    Please refer to the Appendix for a bit more detail on what happens here under the hood.

    As the user probably does not want to share his or her screen for the entire call, let’s add a function to remove it as well.

    function stopScreenShare() {
      screenStream.stop();
      screenSenders.forEach(sender => {
        pc1.removeTrack(sender);
      });
    }

    We are holding on to a reference to the original stream to be able to call stop() on it to release the getUserMedia permission we got from the user. The addTrack() call in our screenShare() function returned us an RTCRtpSender object, which we are storing so we can hand it to the removeTrack() function.

    All of the code combined with some extra syntactic sugar can be found on our MultiStream test page.

    If you are going to build something which allows both ends of the call to add screen share, a more realistic scenario than our demo, you will need to handle special cases. For example, multiple users might accidentally try to add another stream (e.g. the screen share) exactly at the same time and you may end up with a new corner-case for renegotiation called “glare.” This is what happens when both ends of the WebRTC session decide to send new offers at the same time. We do not yet support the “rollback” session description type that can be used to recover from glare (see the JSEP draft and the Firefox bug). Probably the best interim solution to prevent glare is to announce via your signaling channel that the user did something which is going to kick off another round of renegotiation. Then, wait for the okay from the far end before you call createOffer() locally.
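    The interim glare-avoidance idea above can be sketched as a small state machine. Here, sendSignal and renegotiate are hypothetical callbacks supplied by the app (your signaling-channel send function and something that ultimately calls createOffer()):

```javascript
// Minimal sketch: announce the intent to renegotiate over signaling,
// and only kick off createOffer() after the far end acknowledges.
function makeNegotiator(sendSignal, renegotiate) {
  let pending = false;
  return {
    requestRenegotiation() {
      if (pending) return false;          // a round is already in flight
      pending = true;
      sendSignal({ type: 'renegotiation-request' });
      return true;
    },
    onSignal(msg) {
      if (msg.type === 'renegotiation-ok' && pending) {
        pending = false;
        renegotiate();                    // safe to create the offer now
      }
    },
  };
}
```

A real implementation would also need to resolve the case where both ends request renegotiation simultaneously, e.g. by a deterministic tie-break such as comparing peer ids.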

    Appendix

    This is an example renegotiation offer SDP from Firefox 39 when adding the screen share:

    v=0
    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4 0.0.0.0
    s=-
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1 sdparta_2
    a=ice-options:trickle
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_0
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=rtcp-mux
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=setup:actpass
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_1
    a=msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_2
    a=msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that each track gets its own m-section, denoted by the msid attribute.

    As you can see from the BUNDLE attribute, Firefox offers to put the new video stream, with its different msid value, into the same bundled transport. That means if the answerer agrees we can start sending the video stream over the already established transport. We don’t have to go through another ICE and DTLS round. And in case of TURN servers we save another relay resource.
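    The bundled m-sections can be read straight out of the a=group:BUNDLE line. The following toy helper (for illustration only, not a general SDP parser) extracts them:

```javascript
// Extract the bundled mids from an SDP blob like the one above.
function getBundledMids(sdp) {
  const line = sdp.split(/\r?\n/).find(l => l.startsWith('a=group:BUNDLE '));
  return line ? line.slice('a=group:BUNDLE '.length).split(' ') : [];
}
```

Run against the offer above, this would return the three sdparta mids sharing one transport.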

    Hypothetically, this is what the previous offer would look like if it used plan B (as Chrome does):

    v=0
    o=mozilla...THIS_IS_SDPARTA-39.0a1 7832380118043521940 1 IN IP4 0.0.0.0
    s=-
    t=0 0
    a=fingerprint:sha-256 4B:31:DA:18:68:AA:76:A9:C9:A7:45:4D:3A:B3:61:E9:A9:5F:DE:63:3A:98:7C:E5:34:E4:A5:B6:95:C6:F2:E1
    a=group:BUNDLE sdparta_0 sdparta_1
    a=ice-options:trickle
    a=msid-semantic:WMS *
    m=audio 9 RTP/SAVPF 109 9 0 8
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_0
    a=rtcp-mux
    a=rtpmap:109 opus/48000/2
    a=rtpmap:9 G722/8000/1
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=setup:actpass
    a=ssrc:323910839 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {920e9ffc-728e-0d40-a1b9-ebd0025c860a}
    a=ssrc:323910839 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    m=video 9 RTP/SAVPF 120
    c=IN IP4 0.0.0.0
    a=candidate:0 1 UDP 2130379007 10.252.26.177 62583 typ host
    a=candidate:1 1 UDP 1694236671 63.245.221.32 54687 typ srflx raddr 10.252.26.177 rport 62583
    a=sendrecv
    a=fmtp:120 max-fs=12288;max-fr=60
    a=ice-pwd:3aefa1a552633717497bdff7158dd4a1
    a=ice-ufrag:730b2351
    a=mid:sdparta_1
    a=rtcp-fb:120 nack
    a=rtcp-fb:120 nack pli
    a=rtcp-fb:120 ccm fir
    a=rtcp-mux
    a=rtpmap:120 VP8/90000
    a=setup:actpass
    a=ssrc:2917595157 msid:{d57d3917-64e9-4f49-adfb-b049d165c312} {35eeb34f-f89c-3946-8e5e-2d5abd38c5a5}
    a=ssrc:2917595157 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}
    a=ssrc:2325911938 msid:{3a2bfe17-c65d-364a-af14-415d90bb9f52} {aa7a4ca4-189b-504a-9748-5c22bc7a6c4f}
    a=ssrc:2325911938 cname:{72b9ff9f-4d8a-5244-b19a-bd9b47251770}

    Note that there is only one video m-section, with two different msids, which are part of the ssrc attributes themselves rather than on their own “a” lines (these are called “source-level” attributes).

  3. Understanding the CSS box model for inline elements

    In a web page, every element is rendered as a rectangular box. The box model describes how the element’s content, padding, border, and margin determine the space occupied by the element and its relation to other elements in the page.

    Depending on the element’s display property, its box may be one of two types: a block box or an inline box. The box model is applied differently to these two types. In this article we’ll see how the box model is applied to inline boxes, and how a new feature in the Firefox Developer Tools can help you visualize the box model for inline elements.

    Inline boxes and line boxes

    Inline boxes are laid out horizontally in a box called a line box:

    2-inline-boxes

    If there isn’t enough horizontal space to fit all elements into a single line, another line box is created under the first one. A single inline element may then be split across lines:

    line-boxes

    The box model for inline boxes

    When an inline box is split across more than one line, it’s still logically a single box. This means that any horizontal padding, border, or margin is only applied to the start of the first line occupied by the box, and the end of the last line.

    For example, in the screenshot below, the highlighted span is split across 2 lines. Its horizontal padding doesn’t apply to the end of the first line or the beginning of the second line:

    horizontal-padding

    Also, any vertical padding, border, or margin applied to an element will not push away elements above or below it:

    vertical-padding

    However, note that vertical padding and border are still applied, and the padding still pushes out the border:

    vertical-border-padding

    If you need to adjust the height between lines, use line-height instead.
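    The rules above can be seen in a minimal fragment like this one (the class name is illustrative):

```html
<style>
  .highlight {
    padding: 0.5em 1em;      /* horizontal part applies only at the start and end */
    border: 2px solid teal;  /* drawn, but does not push lines above or below */
    margin-top: 2em;         /* no effect on a non-replaced inline box */
    line-height: 2;          /* the way to adjust the height between lines */
  }
</style>
<p>Some text before <span class="highlight">an inline span that is long
enough to wrap across two line boxes</span> and some text after.</p>
```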

    Inspecting inline elements with devtools

    When debugging layout issues, it’s important to have tools that give you the complete picture. One of these tools is the highlighter, which all browsers include in their devtools. You can use it to select elements for closer inspection, but it also gives you information about layout.

    layout-info

    In the example above, the highlighter in Firefox 39 is used to highlight an inline element split across several lines.

    The highlighter displays guides to help you align elements, gives the complete dimensions of the node, and displays an overlay showing the box model.

    From Firefox 39 onwards, the box model overlay for split inline elements shows each individual line occupied by the element. In this example the content region is displayed in light blue and is split over four lines. A right padding is also defined for the node, and the highlighter shows the padding region in purple.

    Highlighting each individual line box is important for understanding how the box model is applied to inline elements, and also helps you check the space between lines and the vertical alignment of inline boxes.

  4. How to make a browser app for Firefox OS

    Firefox OS is an operating system built on top of the Firefox web browser engine, which is called Gecko. A browser app on Firefox OS provides a user interface written with HTML5 technology and manages web page browsing using the Browser API. It also manages tabbing, browsing history, bookmarks, and so on depending on the implementation.

    While Firefox OS already includes a browser, you can use the browser API to create your own browser or add browser functionality to your app. This article shows you how to build a browser app for Firefox OS devices. Following the steps, you’ll get a basic Firefox OS browser app with an address bar and back/forward buttons to browse web pages.

    The source code can be downloaded at https://github.com/begeeben/firefox-os-browser-sample.

    Prerequisites: WebIDE

    The WebIDE is available with Firefox 34 or later. A Firefox OS device is not required to develop, build or test a browser app. By using the WebIDE, we can easily bootstrap a web app, make HTML/CSS/JS modifications and run the app on one of the Firefox OS simulators. To open the WebIDE within Firefox, select Tools > Web Developer > WebIDE from the top menu:

    open webide

    Create an app from the template

    First, bootstrap a web app using the WebIDE empty template. The app type needs to be privileged so the browser permission can be set. This permission is required in order to use the Browser API. The Browser API provides additional methods and events to manage iframes.

    new privileged app

    Below is a screenshot of what the app should look like. Next we’ll start to add code to make a simple browser app.

    mybrowser empty

    Edit the manifest.webapp

    Let’s start to make some code changes. The template has already set the app type to privileged in the manifest.webapp.

      "type": "privileged",

    To allow our app to use the Browser API, we need to specify the browser permission.

      "permissions": {
        "browser": {}
      },

    HTML structure

    The browser app we’re building will have a toolbar on top and a div as the iframe container which displays ‘Hello myBrowser!’ by default. The browser iframe will be created later using JavaScript. We could have added the iframe to the HTML, but in this example a new browser iframe will be created dynamically.

    <!-- browser toolbar -->
    <div id="toolbar">
        <button id="goback-button" type="button" disabled>&lt;</button>
        <button id="goforward-button" type="button" disabled>&gt;</button>
        <!-- url bar displays page url or page title -->
        <form id="url-bar">
            <input id="url-input" type="text">
        </form>
        <button id="url-button" form="url-bar">GO</button>
    </div>
    <!-- the container for the browser frame -->
    <div id="frame-container" class="empty">
        <div id="empty-frame">
            <h1>Hello myBrowser!</h1>
        </div>
    </div>

    Below is a screenshot of the simple browser app:

    browser startup

    Manage the browser iframe

    The browser app has to handle mozbrowser events, which are available when the app has the browser permission. In order to separate the UI from mozbrowser event handling, and to preserve the flexibility of supporting multiple tabs in the future, we have added a tab.js file. The following code in tab.js creates an iframe with the mozbrowser attribute to enable the use of the Browser API.

    /**
     * Returns an iframe which runs in a child process with Browser API enabled
     * and fullscreen is allowed
     *
     * @param  {String} [url] Optional URL
     * @return {iframe}       An OOP mozbrowser iframe
     */
    function createIFrame (url) {
      var iframe = document.createElement('iframe');
      iframe.setAttribute('mozbrowser', true);
      iframe.setAttribute('mozallowfullscreen', true);
      iframe.setAttribute('remote', true);
     
      if (url) {
        iframe.src = url;
      }
     
      return iframe;
    }

    The mozallowfullscreen attribute enables the embedded web page of the iframe to use fullscreen mode. A web page can request fullscreen mode by calling Element.mozRequestFullscreen().

    The remote attribute separates the embedded iframe into another child process. This is needed for security reasons to prevent malicious websites from compromising the browser app. Currently, the attribute does nothing in this sample browser app because nested OOP has not yet been implemented. See Bug 1020135 - (nested-oop) [meta] Allow nested oop <iframe mozbrowser>.

    Next, the Tab object constructor is added to tab.js. This constructor handles browser iframe creation and browser event handling. When a Tab is constructed, it creates a browser iframe and attaches the mozbrowser event listeners to it:

    /**
     * The browser tab constructor.
     *
     * Creates an iframe and attaches mozbrowser events for web browsing.
     *
     * Implements EventListener Interface.
     *
     * @param {String} url An optional plaintext URL
     */
    function Tab (url) {
      this.iframe = createIFrame(url);
      this.title = null;
      this.url = url;
     
      this.iframe.addEventListener('mozbrowserloadstart', this);
      this.iframe.addEventListener('mozbrowserlocationchange', this);
      this.iframe.addEventListener('mozbrowsertitlechange', this);
      this.iframe.addEventListener('mozbrowserloadend', this);
      this.iframe.addEventListener('mozbrowsererror', this);
    }

    There are additional mozbrowser events that are not used in this simple browser app. The events used in the sample provide access to useful information about the iframe, and our browser app uses them to provide web page details, like the title, URL, loading progress, contextmenu, etc.

    For example, to display the page title on the toolbar, we use the mozbrowsertitlechange event to retrieve the title of the web page. Once the title is updated, a CustomEvent 'tab:titlechange' with the Tab itself as the detail is dispatched to notify other components to update the title.

    Tab.prototype.mozbrowsertitlechange = function _mozbrowsertitlechange (e) {
      if (e.detail) {
        this.title = e.detail;
      }
     
      var event = new CustomEvent('tab:titlechange', { detail: this });
      window.dispatchEvent(event);
    };

    Code can now be added to the main app.js to handle this custom event, which updates the title.

    /**
     * Display the title of the currentTab on titlechange event.
     */
    window.addEventListener('tab:titlechange', function (e) {
      if (currentTab === e.detail) {
        urlInput.value = currentTab.title;
      }
    });

    Browse a web page on submit event

    Using the address bar as a search bar is a common and useful practice among browsers. When the user submits the input, we first check whether it is a valid URL. If not, the input is treated as a search term and appended to a search engine URI. The validation can be handled in several ways: the sample app uses an input element with its type attribute set to url, but you could also pass the string to the URL constructor, which throws a DOM exception for an invalid URL; catch that exception and treat the string as a search term.

    /**
     * The default search engine URI
     *
     * @type {String}
     */
    var searchEngineUri = 'https://search.yahoo.com/search?p={searchTerms}';
     
    /**
     * Using an input element to check the validity of the input URL. If the input
     * is not valid, returns a search URL.
     *
     * @param  {String} input           A plaintext URL or search terms
     * @param  {String} searchEngineUri The search engine to be used
     * @return {String}                 A valid URL
     */
    function getUrlFromInput(input, searchEngineUri) {
      var urlValidate = document.createElement('input');
      urlValidate.setAttribute('type', 'url');
      urlValidate.setAttribute('value', input);
     
      if (!urlValidate.validity.valid) {
        var uri = searchEngineUri.replace('{searchTerms}', input);
        return uri;
      }
     
      return input;
    }

    Code can then be added to the main app.js to handle the submit button. This code checks to see if a currently active Tab object exists, and either loads the URL or reloads the URL if it has not changed. If one does not exist, it is created and the URL is loaded.

    /**
     * Check the input and browse the address with a Tab object on url submit.
     */
    window.addEventListener('submit', function (e) {
      e.preventDefault();
     
      if (!currentUrlInput.trim()) {
        return;
      }
     
      if (frameContainer.classList.contains('empty')) {
        frameContainer.classList.remove('empty');
      }
     
      var url = getUrlFromInput(currentUrlInput.trim(), searchEngineUri);
     
      if (!currentTab) {
        currentTab = new Tab(url);
        frameContainer.appendChild(currentTab.iframe);
      } else if (currentUrlInput === currentTab.title) {
        currentTab.reload();
      } else {
        currentTab.goToUrl(url);
      }
    });

    The currentTab.reload function uses the iframe.reload function provided by the Browser API to reload the page. The code below is added to the tab.js file to handle the reload.

    /**
     * Reload the current page.
     */
    Tab.prototype.reload = function _reload () {
      this.iframe.reload();
    };

    Enable/disable back and forward buttons

    Finally the Browser API can be used to check if the page can go backward or forward in the navigation history. The iframe.getCanGoBack method returns a DOMRequest object which indicates if the page can go backward in its onsuccess callback. We use a Promise to wrap this function for easier access. This code is added to the tab.js file.

    /**
     * Check if the iframe can go backward in the navigation history.
     *
     * @return {Promise} Resolve with true if it can go backward.
     */
    Tab.prototype.getCanGoBack = function _getCanGoBack () {
      var self = this;
     
      return new Promise(function (resolve, reject) {
        var request = self.iframe.getCanGoBack();
     
        request.onsuccess = function () {
          if (this.result) {
            resolve(true);
          } else {
            resolve(false);
          }
        };
      });
    };
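    Note that the wrapper above only resolves; the error path is never wired up. A more general sketch, wrapping any DOMRequest-style object (onsuccess/onerror plus result/error properties) and handling both outcomes, might look like this:

```javascript
// Turn a DOMRequest-style object into a Promise, wiring up both the
// success and the error callbacks.
function domRequestToPromise(request) {
  return new Promise(function (resolve, reject) {
    request.onsuccess = function () { resolve(request.result); };
    request.onerror = function () { reject(request.error); };
  });
}
```

With such a helper, getCanGoBack and getCanGoForward reduce to one-liners like domRequestToPromise(this.iframe.getCanGoBack()).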

    The check is performed in the app.js file when a page is loaded. This occurs when the iframe receives a mozbrowserloadend event.

    /**
     * Enable/disable goback and goforward buttons accordingly when the
     * currentTab is loaded.
     */
    window.addEventListener('tab:loadend', function (e) {
      if (currentTab === e.detail) {
        currentTab.getCanGoBack().then(function(canGoBack) {
          gobackButton.disabled = !canGoBack;
        });
     
        currentTab.getCanGoForward().then(function(canGoForward) {
          goforwardButton.disabled = !canGoForward;
        });
      }
    });

    Run the app

    One of the Firefox OS simulators can now be used to test the sample browser app.

    Firefox_WebIDE_installing_simulator

    On the top right of the WebIDE, click Select Runtime to choose a Firefox OS phone or simulator:

    Choosing a Firefox OS runtime in the WebIDE

    Once a Firefox OS runtime is selected and launched, you can install and run the app by clicking the play button within the WebIDE. See the WebIDE documentation for more information on building, debugging, and deploying Firefox OS apps.

    The finished browser app running in the simulator

    Final thoughts

    There’s more functionality that could be added to make this simple little browser really useful, including bookmarks, settings, search suggestions, and sharing. Feel free to expand on this code and have fun with it. Let us know what you think and be sure to spread the word about Firefox OS!

    Additional information

    https://hacks.mozilla.org/2014/08/building-the-firefox-browser-for-firefox-os/
    https://developer.mozilla.org/en-US/docs/Web/API/Using_the_Browser_API

  5. Using the Firefox DevTools to Debug fetch() on GitHub

    Firefox Nightly recently added preliminary support for Fetch, a modern, Promise-based replacement for XMLHttpRequest (XHR). Our initial work supported most of the Fetch Specification, but not quite all of it. Specifically, when Fetch first appeared in Nightly, we hadn’t yet implemented serializing and de-serializing of FormData objects.

    GitHub was already using Fetch in production with a home-grown polyfill, and required support for serializing FormData in order to upload images to GitHub Issues. Thus, when our early, incomplete implementation of Fetch landed in Nightly, the GitHub polyfill stepped out of the way, and image uploads from Firefox broke.

    In the 15-minute video below, Dan Callahan shows a real-world instance of using the Firefox Developer Tools to help find, file, and fix Bug 1143857: “Fetch does not serialize FormData body; breaks GitHub.” This isn’t a canned presentation, but rather a comprehensive, practical demonstration of actually debugging minified JavaScript and broken event handlers using the Firefox DevTools, reporting a Gecko bug in Bugzilla, and ultimately testing a patched build of Firefox.

    Use the following links to jump to a specific section of the video on YouTube:

    • 0:13 – The error
    • 0:50 – Using the Network Panel
    • 1:30 – Editing and Resending HTTP Requests
    • 2:02 – Hypothesis: FormData was coerced to a String, not serialized
    • 2:40 – Prettifying minified JavaScript
    • 3:10 – Setting breakpoints on event handlers
    • 4:57 – Navigating the call stack
    • 7:54 – Setting breakpoints on lines
    • 8:56 – GitHub’s FormData constructor
    • 10:48 – Invoking fetch()
    • 11:53 – Verifying the bug by testing fetch() on another domain
    • 12:52 – Checking the docs for fetch()
    • 13:42 – Filing a Gecko bug in Bugzilla
    • 14:42 – The lifecycle of Bug 1143857: New, Duplicate, Reopened, Resolved
    • 15:41 – Verifying a fixed build of Firefox

    We expect Firefox Developer Edition version 39 to ship later this month with full support for the Fetch API.

  6. An analytics primer for developers

    There are three kinds of lies: lies, damned lies, and statistics – Mark Twain

    Deciding what to track (all the things)

    When you are adding analytics to a system you should try to log everything. If you ever need to pull information out of the system later, it’s much better to have every piece of information to hand than to realise you need data that you don’t yet track. Here are some guidelines and suggestions for collecting and analysing information about how people are interacting with your website or app.

    Grouping your stats as a best practice

    Most analytics platforms allow you to tag an event with metadata. This lets you analyse stats against each other and makes it easier to compare elements in a user interaction.

    For example, if you are logging clicks on a menu, you could track each menu item differently e.g.:

    track("Home pressed");
    track("Cart pressed");
    track("Logout pressed");

    Doing this makes it harder to answer questions such as “which menu item is most popular?”. Using metadata, you can have most analytics platforms perform such calculations for you:

    track("Menu pressed","Home button");
    track("Menu pressed","Cart button");
    track("Menu pressed","Logout button");

    The analytics above mean you now have a total of all menu presses, and you can find the most/least popular of the menu items with no extra effort.

    Optimising your funnel

    A conversion funnel is a term of art derived from a consumer marketing model. The metaphor of the funnel describes the flow of steps a user goes through as they engage more deeply with your software. Imagine you want to know how many users clicked Log in and then paid at the checkout. If you track events such as “Checkout complete” and “User logged in” you can then ask your analytics platform what percentage of users did both within a certain time frame (a day for instance).

    Imagine the answer comes out to be 10%. This tells you something useful about the behaviour of your users (bear in mind this funnel is not order-sensitive, i.e., it does not matter whether the events happen in the order login -> cart -> pay or cart -> login -> pay). You can then start to optimise parts of your app and use this value to determine whether you are converting more of your users to make a purchase or otherwise engage more deeply.

    Deciding what to measure

    Depending on your business, different stats will have different levels of importance. Here are some common stats of interest to developers of apps or online services:

    Number of sessions
    The total number of sessions on your product (the user opening your product, using it, then closing it = 1 session)
    Session length
    How long each session lasts (can be mode, mean, median)
    Retention
    How many people come back to your product having used it before (there are a variety of metrics such as rolling retention, 30 day retention etc)
    MAU
    Monthly active users: how many users use the app at least once in a given month
    DAU
    Daily active users: how many users use the app at least once in a given day
    ARPU
    Average revenue per user: how much money you make per person
    ATV
    Average transaction value: how much money you make per sale
    CAC
    Customer acquisition cost: how much it costs to get one extra user (normally specified by the channel for getting them)
    CLV
    Customer lifetime value: total profit made from a user (usually projected)
    Churn
    The number of people who leave your product in a given time (usually given as a percentage of total user base)
    Cycle time
    The time it takes for one user to refer another

    Choosing an analytics tool or platform

    There are plenty of analytics providers, listed below are some of the best known and most widely used:

    Google Analytics

    Website
    Developer documentation

    Quick event log example:

    ga('send', 'event', 'button', 'click');

    Pros:

    • Free
    • Easy to set up

    Cons:

    • Steep learning curve for using the platform
    • Specialist training can be required to get the most out of the platform

    Single page apps:

    If you are making a single page app/website, you need to keep Google informed that the user is still on your page and hasn’t bounced (gone to your page/app and left without doing anything):

    ga('set' , 'page', location.pathname + location.search + location.hash);
    ga('send', 'pageview');

    Use the above code every time a user navigates to a new section of your app/website to let Google know the user is still browsing your site/app.

    Flurry

    Website
    Developer documentation

    Quick event log example:

    FlurryAgent.logEvent("Button clicked");
    FlurryAgent.logEvent("Button clicked",{more : 'data'});

    Pros:

    • Free
    • Easy to set up

    Cons:

    • Data normally 24 hours behind real time
    • Takes ages to load the data

    Mixpanel

    Website
    Developer documentation

    Quick event log example:

    mixpanel.track("Button clicked");
    mixpanel.track("Button clicked",{more : 'data'});

    Pros:

    • Free trial
    • Easy to set up
    • Real-time data

    Cons:

    • Gets expensive after the free trial
    • If you are tracking a lot of points, the interface can get cluttered

    Speeding up requests

    When you are loading an external JS file you want to do it asynchronously if possible to speed up the page load.

    <script type="text/javascript" async src="..."></script>

    The above markup loads the JavaScript asynchronously, but note that the async attribute only applies to external scripts loaded via src, and assumes the user has a browser supporting HTML5.

    //jQuery example
    $.getScript('https://cdn.flurry.com/js/flurry.js', 
    function(){
       ...
    });

    This code will load the JavaScript asynchronously with greater browser support.

    The next problem is that you could try to log an analytics event before the framework has loaded, so you need to check that the framework’s variable exists first:

    if(typeof FlurryAgent != "undefined"){
       ...
    }

    This will prevent errors and will also allow you to easily disable analytics during testing. (You can just stop the script from being loaded – and the variable will never be defined.)

    The problem here is that you might be missing analytics whilst waiting for the script to load. Instead, you can make a queue to store the events and then post them all when the script loads:

    var queue = [];
     
    if(typeof FlurryAgent != "undefined"){
       ...
    }else{
       queue.push(["data",{more : data}]);
    }
     
    ...
     
    //jQuery example
    $.getScript('https://cdn.flurry.com/js/flurry.js', 
    function(){
       ...
     
       for(var i = 0;i < queue.length;i++)
       {
          FlurryAgent.logEvent(queue[i][0],queue[i][1]);
       }
       queue = [];
    });

    Analytics for your Firefox App

    You can use any of the above providers with Firefox OS, but remember that the snippets you paste into your code are generally protocol-agnostic: they start with //myjs.com/analytics.js, and you need to choose either http: or https: explicitly, e.g. https://myjs.com/analytics.js. (This is required only if you are making a packaged app.)

    Let us know how it goes.

  7. Optimising SVG images

    SVG is a vector image format based on XML. It has great advantages, most notably it is lightweight. Since SVG is a text format, it can be viewed and modified using a simple text editor, and applying GZIP compression produces excellent results.

    It’s critical for a website to provide assets that are as lightweight as possible, especially on mobile where bandwidth can be very limited. You want to optimise your SVG files to have your app load and display as quickly as possible.

    This article will show how to use dedicated tools to optimise SVG images. You will also learn how the markup works so you can go the extra mile to produce the lightest possible images.

    Introducing svgo

    Optimising SVG is very similar to minifying CSS or other text-based formats such as JavaScript or HTML. It is mainly about removing useless whitespace and redundant characters.

    The tool I recommend to reduce the size of SVG images is svgo. It is written for node.js. To install it, just do:

    $ npm install -g svgo

    In its basic form, you’ll use a command line like this:

    $ svgo --input img/graph.svg --output img/optimised-graph.svg

    Please make sure to specify an --output parameter if you want to keep the original image. Otherwise svgo will replace it with the optimised version.

    svgo will apply several changes to the original file—stripping out useless comments, tags, and attributes, reducing the precision of numbers in path definitions, or sorting attributes for better GZIP compression.

    This works with no surprises for simple images. However, in more complex cases, the image manipulation can result in a garbled file.

    svgo plugins

    svgo is very modular thanks to a plugin-based architecture.

    When optimising complex images, I’ve noticed that the main issues are caused by two svgo plugins:

    • convertPathData
    • mergePaths

    Deactivating these will ensure you get a correct result in most cases:

    $ svgo --disable=convertPathData --disable=mergePaths -i img/a.svg

    convertPathData will convert the path data using relative and shorthand notations. Unfortunately, some environments won’t fully recognise this syntax and you’ll get something like:

    Screenshot of Gnome Image Viewer displaying an original SVG image (left) and a version optimised via svgo (right)

    Please note that the optimised image will display correctly in all browsers. So you may still want to use this plugin.

    The other plugin that can cause you trouble—mergePaths—will merge together shapes of the same style to reduce the number of <path> tags in the source. However, this might create issues if two paths overlap.

    Merge paths issue

    In the image on the right, note the rendering differences around the character’s neck and hand, as well as in the Twitter logo. The outline view shows the 3 overlapping paths that make up the character’s head.

    My suggestion is to first try svgo with all plugins activated, then if anything is wrong, deactivate the two mentioned above.

    If the result is still very different from your original image, then you’ll have to deactivate the plugins one by one to detect the one which causes the issue. Here is a list of svgo plugins.

    Optimising even further

    svgo is a great tool, but in some specific cases, you’ll want to compress your SVG images even further. To do so, you have to dig into the file format and do some manual optimisations.

    In these cases, my favourite tool is Inkscape: it is free, open source and available on most platforms.

    If you want to use the mergePaths plugin of svgo, you must combine overlapping paths yourself. Here’s how to do it:

    Open your image in Inkscape and identify the paths sharing the same style (fill and stroke). Select them all (hold Shift for multiple selection), then click on the Path menu and select Union. You’re done: the overlapping paths have been merged into a single one.

    Merge paths technique

    The 3 different paths that create the character’s head are merged, as shown by the outline view on the right.



    Repeat this operation for all paths of the same style that are overlapping and then you’re ready to use svgo again, keeping the mergePaths plugin.

    There are all sorts of different optimisations you can apply manually:

    • Convert strokes to paths so they can be merged with paths of similar style.
    • Cut paths manually to avoid using clip-path.
    • Exclude an underneath path from an overlapping path and merge it with a similar path to avoid layering issues. (In the image above, see the character’s hair: the side hair path is under his head, but the top hair is above it, so you can’t merge the 3 hair paths as is.)

    Final considerations

    These manual optimisations can take a lot of time for meagre results, so think twice before starting!

    A good rule of thumb when optimising SVG images is to make sure the final file has only one path per style (same fill and stroke) and uses no <g> tags to group paths into objects.

    In Firefox OS, we use an icon font, gaia-icons, generated from SVG glyphs. I noticed that optimising them resulted in a significantly lighter font file, with no visual differences.

    Whether you use SVG for embedding images on an app or to create a font file, always remember to optimise. It will make your users happier!

  8. This API is so Fetching!

    For more than a decade the Web has used XMLHttpRequest (XHR) to achieve asynchronous requests in JavaScript. While very useful, XHR is not a very nice API. It suffers from lack of separation of concerns. The input, output and state are all managed by interacting with one object, and state is tracked using events. Also, the event-based model doesn’t play well with JavaScript’s recent focus on Promise- and generator-based asynchronous programming.

    The Fetch API intends to fix most of these problems. It does this by introducing the same primitives to JS that are used in the HTTP protocol. In addition, it introduces a utility function fetch() that succinctly captures the intention of retrieving a resource from the network.

    The Fetch specification, which defines the API, nails down the semantics of a user agent fetching a resource. This, combined with ServiceWorkers, is an attempt to:

    1. Improve the offline experience.
    2. Expose the building blocks of the Web to the platform as part of the extensible web movement.

    As of this writing, the Fetch API is available in Firefox 39 (currently Nightly) and Chrome 42 (currently dev). GitHub has a Fetch polyfill.

    Feature detection

    Fetch API support can be detected by checking for Headers, Request, Response or fetch on the window or worker scope.

    Simple fetching

    The most useful, high-level part of the Fetch API is the fetch() function. In its simplest form it takes a URL and returns a promise that resolves to the response. The response is captured as a Response object.

    fetch("/data.json").then(function(res) {
      // res instanceof Response == true.
      if (res.ok) {
        res.json().then(function(data) {
          console.log(data.entries);
        });
      } else {
        console.log("Looks like the response wasn't perfect, got status", res.status);
      }
    }, function(e) {
      console.log("Fetch failed!", e);
    });

    Submitting some parameters, it would look like this:

    fetch("http://www.example.org/submit.php", {
      method: "POST",
      headers: {
        "Content-Type": "application/x-www-form-urlencoded"
      },
      body: "firstName=Nikhil&favColor=blue&password=easytoguess"
    }).then(function(res) {
      if (res.ok) {
        alert("Perfect! Your settings are saved.");
      } else if (res.status == 401) {
        alert("Oops! You are not authorized.");
      }
    }, function(e) {
      alert("Error submitting form!");
    });

    The fetch() function’s arguments are the same as those passed to the
    Request() constructor, so you may directly pass arbitrarily complex requests to fetch() as discussed below.

    Headers

    Fetch introduces 3 interfaces. These are Headers, Request and
    Response. They map directly to the underlying HTTP concepts, but have
    certain visibility filters in place for privacy and security reasons, such as
    supporting CORS rules and ensuring cookies aren’t readable by third parties.

    The Headers interface is a simple multi-map of names to values:

    var content = "Hello World";
    var reqHeaders = new Headers();
    reqHeaders.append("Content-Type", "text/plain");
    reqHeaders.append("Content-Length", content.length.toString());
    reqHeaders.append("X-Custom-Header", "ProcessThisImmediately");

    The same can be achieved by passing an array of arrays or a JS object literal
    to the constructor:

    reqHeaders = new Headers({
      "Content-Type": "text/plain",
      "Content-Length": content.length.toString(),
      "X-Custom-Header": "ProcessThisImmediately",
    });

    The contents can be queried and retrieved:

    console.log(reqHeaders.has("Content-Type")); // true
    console.log(reqHeaders.has("Set-Cookie")); // false
    reqHeaders.set("Content-Type", "text/html");
    reqHeaders.append("X-Custom-Header", "AnotherValue");
     
    console.log(reqHeaders.get("Content-Length")); // 11
    console.log(reqHeaders.getAll("X-Custom-Header")); // ["ProcessThisImmediately", "AnotherValue"]
     
    reqHeaders.delete("X-Custom-Header");
    console.log(reqHeaders.getAll("X-Custom-Header")); // []

    Some of these operations are only useful in ServiceWorkers, but they provide
    a much nicer API to Headers.

    Since Headers can be sent in requests, or received in responses, and have various limitations about what information can and should be mutable, Headers objects have a guard property. This is not exposed to the Web, but it affects which mutation operations are allowed on the Headers object.
    Possible values are:

    • “none”: default.
    • “request”: guard for a Headers object obtained from a Request (Request.headers).
    • “request-no-cors”: guard for a Headers object obtained from a Request created
      with mode “no-cors”.
    • “response”: naturally, for Headers obtained from Response (Response.headers).
    • “immutable”: Mostly used for ServiceWorkers, renders a Headers object
      read-only.

    The details of how each guard affects the behaviors of the Headers object are
    in the specification. For example, you may not append or set a “request” guarded Headers’ “Content-Length” header. Similarly, inserting “Set-Cookie” into a Response header is not allowed so that ServiceWorkers may not set cookies via synthesized Responses.

    All of the Headers methods throw TypeError if name is not a valid HTTP Header name. The mutation operations will throw TypeError if there is an immutable guard. Otherwise they fail silently. For example:

    var res = Response.error();
    try {
      res.headers.set("Origin", "http://mybank.com");
    } catch(e) {
      console.log("Cannot pretend to be a bank!");
    }

    Request

    The Request interface defines a request to fetch a resource over HTTP. URL, method and headers are expected, but the Request also allows specifying a body, a request mode, credentials and cache hints.

    The simplest Request is, of course, just a URL, as you may do to GET a resource.

    var req = new Request("/index.html");
    console.log(req.method); // "GET"
    console.log(req.url); // "http://example.com/index.html"

    You may also pass a Request to the Request() constructor to create a copy.
    (This is not the same as calling the clone() method, which is covered in
    the “Streams and cloning” section.)

    var copy = new Request(req);
    console.log(copy.method); // "GET"
    console.log(copy.url); // "http://example.com/index.html"

    Again, this form is probably only useful in ServiceWorkers.

    The non-URL attributes of the Request can only be set by passing initial
    values as a second argument to the constructor. This argument is a dictionary.

    var uploadReq = new Request("/uploadImage", {
      method: "POST",
      headers: {
        "Content-Type": "image/png",
      },
      body: "image data"
    });

    The Request’s mode is used to determine if cross-origin requests lead to valid responses, and which properties on the response are readable. Legal mode values are "same-origin", "no-cors" (default) and "cors".

    The "same-origin" mode is simple: if a request is made to another origin with this mode set, the result is simply an error. You could use this to ensure that
    a request is always being made to your origin.

    var arbitraryUrl = document.getElementById("url-input").value;
    fetch(arbitraryUrl, { mode: "same-origin" }).then(function(res) {
      console.log("Response succeeded?", res.ok);
    }, function(e) {
      console.log("Please enter a same-origin URL!");
    });

    The "no-cors" mode captures what the web platform does by default for scripts you import from CDNs, images hosted on other domains, and so on. First, it prevents the method from being anything other than “HEAD”, “GET” or “POST”. Second, if any ServiceWorkers intercept these requests, they may not add or override any headers except for these. Third, JavaScript may not access any properties of the resulting Response. This ensures that ServiceWorkers do not affect the semantics of the Web and prevents security and privacy issues that could arise from leaking data across domains.

    "cors" mode is what you’ll usually use to make known cross-origin requests to access various APIs offered by other vendors. These are expected to adhere to
    the CORS protocol. Only a limited set of headers is exposed in the Response, but the body is readable. For example, you could get a list of Flickr’s most interesting photos today like this:

    var u = new URLSearchParams();
    u.append('method', 'flickr.interestingness.getList');
    u.append('api_key', '<insert api key here>');
    u.append('format', 'json');
    u.append('nojsoncallback', '1');
     
    var apiCall = fetch('https://api.flickr.com/services/rest?' + u);
     
    apiCall.then(function(response) {
      return response.json().then(function(json) {
        // photo is a list of photos.
        return json.photos.photo;
      });
    }).then(function(photos) {
      photos.forEach(function(photo) {
        console.log(photo.title);
      });
    });

    You may not read out the “Date” header since Flickr does not allow it via
    Access-Control-Expose-Headers.

    response.headers.get("Date"); // null

    The credentials enumeration determines if cookies for the other domain are
    sent to cross-origin requests. This is similar to XHR’s withCredentials
    flag, but tri-valued as "omit" (default), "same-origin" and "include".

    The Request object will also give the ability to offer caching hints to the user-agent. This is currently undergoing some security review. Firefox exposes the attribute, but it has no effect.

    Requests have two read-only attributes that are relevant to ServiceWorkers
    intercepting them. There is the string referrer, which is set by the UA to be
    the referrer of the Request. This may be an empty string. The other is
    context, which is a rather large enumeration defining what sort of resource is being fetched. This could be “image” if the request is from an <img> tag in the controlled document, “worker” if it is an attempt to load a worker script, and so on. When used with the fetch() function, it is “fetch”.

    Response

    Response instances are returned by calls to fetch(). They can also be created by JS, but this is only useful in ServiceWorkers.

    We have already seen some attributes of Response when we looked at fetch(). The most obvious candidates are status, an integer (default value 200) and statusText (default value “OK”), which correspond to the HTTP status code and reason. The ok attribute is just a shorthand for checking that status is in the range 200-299 inclusive.

    headers is the Response’s Headers object, with guard “response”. The url attribute reflects the URL of the corresponding request.

    Response also has a type, which is “basic”, “cors”, “default”, “error” or
    “opaque”.

    • "basic": normal, same origin response, with all headers exposed except
      “Set-Cookie” and “Set-Cookie2”.
    • "cors": response was received from a valid cross-origin request. Certain headers and the body may be accessed.
    • "error": network error. No useful information describing the error is available. The Response’s status is 0, headers are empty and immutable. This is the type for a Response obtained from Response.error().
    • "opaque": response for a “no-cors” request to a cross-origin resource. Severely restricted.

    The “error” type results in the fetch() Promise rejecting with TypeError.

    There are certain attributes that are useful only in a ServiceWorker scope. The
    idiomatic way to return a Response to an intercepted request in ServiceWorkers is:

    addEventListener('fetch', function(event) {
      event.respondWith(new Response("Response body", {
        headers: { "Content-Type" : "text/plain" }
      }));
    });

    As you can see, Response has a two argument constructor, where both arguments are optional. The first argument is a body initializer, and the second is a dictionary to set the status, statusText and headers.

    The static method Response.error() simply returns an error response. Similarly, Response.redirect(url, status) returns a Response resulting in
    a redirect to url.

    Dealing with bodies

    Both Requests and Responses may contain body data. We’ve been glossing over it because of the various data types body may contain, but we will cover it in detail now.

    A body is an instance of any of the following types:

    • ArrayBuffer
    • ArrayBufferView (Uint8Array and friends)
    • Blob/File
    • string
    • URLSearchParams
    • FormData

    In addition, Request and Response both offer the following methods to extract their body. These all return a Promise that is eventually resolved with the actual content.

    • arrayBuffer()
    • blob()
    • json()
    • text()
    • formData()

    This is a significant improvement over XHR in terms of ease of use of non-text data!

    Request bodies can be set by passing body parameters:

    var form = new FormData(document.getElementById('login-form'));
    fetch("/login", {
      method: "POST",
      body: form
    })

    Responses take the first argument as the body.

    var res = new Response(new File(["chunk", "chunk"], "archive.zip",
                           { type: "application/zip" }));

    Both Request and Response (and by extension the fetch() function), will try to intelligently determine the content type. Request will also automatically set a “Content-Type” header if none is set in the dictionary.

    Streams and cloning

    It is important to realise that Request and Response bodies can only be read once! Both interfaces have a boolean attribute bodyUsed to determine if it is safe to read or not.

    var res = new Response("one time use");
    console.log(res.bodyUsed); // false
    res.text().then(function(v) {
      console.log(res.bodyUsed); // true
    });
    console.log(res.bodyUsed); // true
     
    res.text().catch(function(e) {
      console.log("Tried to read already consumed Response");
    });

    This decision allows easing the transition to an eventual stream-based Fetch API. The intention is to let applications consume data as it arrives, allowing for JavaScript to deal with larger files like videos, and perform things like compression and editing on the fly.

    Often, you’ll want access to the body multiple times. For example, you can use the upcoming Cache API to store Requests and Responses for offline use, and Cache requires bodies to be available for reading.

    So how do you read out the body multiple times within such constraints? The API provides a clone() method on the two interfaces. This will return a clone of the object, with a ‘new’ body. clone() MUST be called before the body of the corresponding object has been used. That is, clone() first, read later.

    addEventListener('fetch', function(evt) {
      var sheep = new Response("Dolly");
      console.log(sheep.bodyUsed); // false
      var clone = sheep.clone();
      console.log(clone.bodyUsed); // false
     
      clone.text();
      console.log(sheep.bodyUsed); // false
      console.log(clone.bodyUsed); // true
     
      evt.respondWith(cache.add(sheep.clone()).then(function(e) {
        return sheep;
      }));
    });

    Future improvements

    Along with the transition to streams, Fetch will eventually have the ability to abort running fetch()es and some way to report the progress of a fetch. These are provided by XHR, but are a little tricky to fit into the Promise-based nature of the Fetch API.

    You can contribute to the evolution of this API by participating in discussions on the WHATWG mailing list and in the issues in the Fetch and ServiceWorker specifications.

    For a better web!

    The author would like to thank Andrea Marchesini, Anne van Kesteren and Ben
    Kelly for helping with the specification and implementation.

  9. Ruby support in Firefox Developer Edition 38

    It was a long-time request from East Asian users, especially Japanese users, to have ruby support in the browser.

    Formerly, because of the lack of native ruby support in Firefox, users had to install add-ons like HTML Ruby to make ruby work. However, in Firefox Developer Edition 38, CSS Ruby has been enabled by default, which also brings the support of HTML5 ruby tags.

    Introduction

    What is ruby? In short, ruby is extra text, usually small, attached to the main text to indicate the pronunciation or meaning of the corresponding characters. This kind of annotation is widely used in Japanese publications. It is also common in Chinese for children's books, educational publications, and dictionaries.

    Ruby Annotation

    Basic Usage

    Basically, ruby support consists of four main tags: <ruby>, <rb>, <rt>, and <rp>. <ruby> wraps the whole ruby structure, <rb> marks the base text in the normal line, <rt> holds the annotation, and <rp> holds fallback text (typically parentheses) that is hidden by default when the browser supports ruby. With these four tags, the result above can be achieved with the following code:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
    </ruby>

    Since the end tags of <rb> and <rt> can be omitted when another ruby element follows, we don't bother closing those tags manually.

    As shown in the image, the duplicate parts as well as <rp>s are hidden automatically. But why should we add content which is hidden by default?

    The answer is: it is more natural in semantics, and it helps conversion to the inline form, which is a more general form accepted by more software. For example, this allows the page to degrade gracefully in a browser with no ruby support. It also enables the user agent to generate well-formed plain text when you copy text with ruby (though this feature hasn’t yet landed in Firefox).

    In addition, the extra content makes it possible to provide inline style of the annotation without changing the document. You would only need to add a rule to your stylesheet:

    ruby, rb, rt, rp {
      display: inline;
      font-size: inherit;
    }

    Actually, if you don’t have those requirements, only <ruby> and <rt> are necessary. For simplest cases, e.g. a single ideographic character, you can use code like:

    <ruby>咲<rt>Saki</rt></ruby>

    Advanced Support

    Aside from the basic usage of ruby, Firefox now provides support for more advanced cases.

    By default, if the width of an annotation does not match its base text, the shorter text will be justified as shown in the example above. However, this behavior can be controlled via the ruby-align property. Aside from the default value (space-around), it can also make the content align to both sides (space-between), centered (center), or aligned to the start side (start).
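    For instance, centering each annotation over its base takes a single rule. (A minimal sketch; since ruby-align is inherited, the rule can be placed on <ruby> itself rather than on each annotation element.)

    ```css
    /* Center annotations over their base text instead of using
       the default space-around distribution. */
    ruby {
      ruby-align: center;
    }
    ```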

    Multiple levels of annotation are also supported via the <rtc> tag, which is a container of <rt>s. Each <rtc> represents one level of annotation; if you leave out the <rtc>, the browser wraps consecutive <rt>s in a single anonymous <rtc>, forming one level.

    For example, we can extend the example above to:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc><rt>Toaru<rt>Kagaku<rt>no<rt>Rērugan</rt></rtc><rp></rp>
    </ruby>

    If you do not put any <rt> inside an <rtc>, the annotation will span the whole base:

    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>
    </ruby>

    You can use ruby-position to place a given level of annotation on the side you want. In the examples above, if you want to put the second level under the main line, apply ruby-position: under; to the <rtc> tag. Currently, only under and the default value over are supported.

    (Note: The CSS Working Group is considering a change to the default value of ruby-position, so that annotations become double-sided by default. This change is likely to happen in a future version of Firefox.)

    Finally, here is an advanced example of ruby combining everything introduced above:

    Advanced Example for Ruby

    rtc:lang(en) {
      ruby-position: under;
      ruby-align: center;
      font-size: 75%;
    }
    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲</rb>
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン</rt><rp>)</rp>
      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>
    </ruby>

    Got questions, comments, feedback on the implementation? Don’t hesitate to share your thoughts or issues here or via bugzilla.

  10. Announcing the MDN Fellowship Program

    For nearly a decade, the Mozilla Developer Network (MDN) has been a vital source of technical information for millions of web and mobile developers. And while each month hundreds of developers actively contribute to MDN, we know there are many more with deep expertise in the Web who aren’t participating—yet. Certainly MDN and the Web would benefit from their knowledge and skill, and we’re piloting a program to provide benefits for them as well.

    Mozillians at Summit 2013. Photo by Tristan Nitot.


    What it is

    The MDN Fellowship pilot is a seven-week part-time (5-10 hours per week) education and leadership program pairing advanced web and mobile developers with engineering and educational experts at Mozilla to work on significant, influential web projects. Fellows will contribute by developing apps, API descriptions, and curriculum on MDN for their project. They will also receive hands-on coaching from project mentors, as well as others at Mozilla who will provide training and guidance on curriculum development, so that their work can be taught to others.

    An overview of projects

    Here’s a look at some of the technical topics and the associated project tasks we’ve identified for our first group of MDN fellows.

    • ServiceWorkers essentially act as proxy servers that sit between web applications, the browser, and (when available) the network. They are key to the success of web apps, enabling effective offline experiences and allowing access to push notifications and background sync APIs.
      What you’ll work on: You’ll write a demonstration web app (new or existing) to showcase Service Worker functionality and provide detailed API descriptions.
    • WebGL is the latest incarnation of the OpenGL family of real-time rendering immediate mode graphics APIs. This year WebGL is getting some cool new features with the publication of standardization efforts around the WebGL 2.0 spec.
      What you’ll do: Develop a curriculum on MDN for teaching the WebGL APIs to developers new to graphics programming.
    • Web app performance. App performance is affected by many factors, including content delivery, rendering, and interactivity. Finding and addressing performance bottlenecks depends on tooling for browser networking and rendering, but also (and often more importantly) on user perception.
      What you’ll do: Develop a curriculum on MDN for teaching developers to master performance tooling and to develop web apps with performance as a feature.
    • TestTheWebForward. Mozilla participates in an important W3C initiative, TestTheWebForward, a community-driven effort for open web platform testing.
      What you’ll do: Review various existing technical specifications to identify gaps between the documentation and the current state of implementations. Refine existing tests to adapt them to this cross-browser test harness.
    • MDN curriculum development. MDN serves as a trusted resource for millions of developers. In 2015, MDN will expand the scope of this content by developing Content Kits: key learning materials including code samples, video screencasts and demos, and more.

      What you’ll do: Act as lead curator for technical curriculum addressing a key web technology, developing code samples, videos, interactive exercises and other essential educational components. You may propose your own subject area (examples: virtual reality on the web, network security, CSS, etc.) or we will work with you to match you based on your subject area knowledge and Mozilla priorities.

    “I’m looking forward to working with a Fellow to make low-level real-time graphics more approachable,” says WebGL project mentor Nick Desaulniers. More information, including project mentors and the type of skills & experience required for each project, can be found on the MDN Fellowship page.

    MDN writers

    Photo by Tristan Nitot.

    How to apply

    If any of these projects sounds interesting, check out the website and apply by April 1. We’ll announce the fellows in May and start the program in June by bringing fellows and mentors together to get everyone familiarized with their projects and one another. Then, for 6 weeks, you’ll work directly with your mentor on your project from your home base. You’ll also receive ongoing feedback and coaching to set you up for success on teaching what you’re building to larger groups of developers.

    Here’s a timeline:

    • Now – April 1: Apply!
    • April: Finalist candidates will be interviewed.
    • May: Fellows are announced.
    • June (dates & location TBA): Orientation at a Mozilla space.
    • June 29 – August 3: Work on your projects + regular team calls to receive coaching on your work.
    • August 11-12: Graduation

    This program is for you if…

    You code, document, and ship with confidence, and now you’d like to start sharing your skills with a broader community. Maybe you want a way out of the walled garden and you’d like to contribute to Mozilla’s mission of keeping the web open for all. Perhaps you want to be effective at sharing your expertise with a wider audience.

    Consider this program if you want an opportunity to:

    • Amplify the impact of your technical expertise by contributing to influential, significant projects at Mozilla.
    • Stretch your technical expertise by working closely with Mozilla technical mentors.
    • Integrate educational best practices into your work with Mozilla teaching mentors.

    Check out the MDN Fellowship website, apply by April 1, and encourage others as well!

    Mozillians at Mozcamp Warsaw. Photo by Tristan Nitot