Standards Articles

  1. Making and Breaking the Web With CSS Gradients

    What is CSS prefixing and why do I care?

    Straight from the source:

    “Browser vendors sometimes add prefixes to experimental or nonstandard CSS properties, so developers can experiment but changes in browser behavior don’t break the code during the standards process. Developers should wait to include the unprefixed property until browser behavior is standardized.”

    As a Web developer, users of your web sites will be affected if you use prefixed CSS properties which later have their prefixes removed—especially if syntax has changed between prefixed and unprefixed variants.

    There are steps you can take in stewardship of an unbroken Web. Begin by checking your stylesheets for outdated gradient syntax and updating with an unprefixed modern equivalent. But first, let’s take a closer look at the issue.

    What are CSS gradients?

    CSS gradients are a type of CSS <image> function (expressed as a property value) that enable developers to style the background of block-level elements to have variations in color instead of just a solid color. The MDN documentation on gradients gives an overview of the various gradient types and how to use them. As always, CSS Tricks has top notch coverage on CSS3 gradients as well.
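    For instance, a minimal two-stop gradient looks like this (the selector and colors are arbitrary examples):

    ```css
    /* Fade from purple on the left to gold on the right */
    .box {
      background: linear-gradient(to right, rebeccapurple, gold);
    }
    ```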

    Screenshot: a site’s stylesheet containing a prefixed CSS gradient

    Removing (and then not removing) prefixed gradients from Firefox

    In Bug 1176496, we tried to remove support for the old -moz- prefixed linear and radial gradients. Unfortunately, we soon realized that it broke the Web for enough sites ([1], [2], [3], [4], [5], [6]) that we had to add back support (for now).

    Sin and syntax

    Due to changes in the spec between the -moz- prefixed implementation and the modern, prefix-less version, it’s not possible to just remove prefixes and get working gradients.

    Here’s a simple example of how the syntax has changed (for linear-gradient):

    /* The old syntax, deprecated and prefixed, for old browsers */
    background: -prefix-linear-gradient(top, blue, white); 
    /* The new syntax needed by standard-compliant browsers (Opera 12.1,
       IE 10, Firefox 16, Chrome 26, Safari 6.1), without prefix */
    background: linear-gradient(to bottom, blue, white);

    In a nutshell, to and at keywords were added, contain and cover keywords were removed, and the angle coordinate system was changed to be more consistent with other parts of the platform.
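    As a concrete illustration of the removed cover keyword and the new at keyword, a radial gradient migrates roughly like this (per the spec, cover maps to farthest-corner and contain to closest-side):

    ```css
    /* Old prefixed syntax: bare position, `cover` extent keyword */
    background: -moz-radial-gradient(center, ellipse cover, blue, white);

    /* Modern syntax: extent keyword, position introduced by `at` */
    background: radial-gradient(ellipse farthest-corner at center, blue, white);
    ```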

    When IE10 came out with support for prefixless new gradients, IEBlog wrote an awesome post illustrating the differences between the prefixed (old) syntax and the new syntax; check that out for more in-depth coverage. The article on CSS3 gradients also has a good overview on the history of CSS gradients and its syntaxes (see “Tweener” and “New” in the “Browser Support/Prefixes” section).

    OK, so like, what should I do?

    You can start checking your stylesheets for outdated gradient syntax and making sure to have an unprefixed modern equivalent.

    Here are some tools and libraries that can help you maintain modern, up-to-date, prefixless CSS:

    If you’re already using the PostCSS plugin Autoprefixer, you won’t have to do anything. If you’re not using it yet, consider adding it to your tool belt. And if you prefer a client-side solution, Lea Verou’s prefix-free.js is another great option.
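    If you maintain fallbacks by hand instead, the usual pattern is to declare a solid color first, then any legacy prefixed gradient, then the standard syntax last, so the cascade picks the best form the browser supports (selector and colors here are illustrative):

    ```css
    .banner {
      background: #2b5797;                                      /* solid fallback */
      background: -moz-linear-gradient(top, #2b5797, #1d3f6e);  /* legacy prefixed */
      background: linear-gradient(to bottom, #2b5797, #1d3f6e); /* standard, wins when supported */
    }
    ```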

    In addition, the web app ColorZilla allows you to enter your old CSS gradient syntax and get a quick conversion to the modern prefixless equivalent.

    Masatoshi Kimura has added a preference that can be used to turn off support for the old -moz- prefixed gradients, giving developers an easy way to visually test for broken gradients. Set layout.css.prefixes.gradients to false (from about:config) in Nightly. This pref should ship in Firefox 42.

    Modernizing your CSS

    And as long as you’re in the middle of editing your stylesheets, now would be a good time to check the rest of them for overall freshness. Flexbox is an area that is particularly troublesome and in need of unbreaking, but good resources exist to ease the pain. CSS border-image is also an area that had changes between prefixed and unprefixed versions.

    Thanks for your help in building and maintaining a Web that works.

  2. The state of Web Components

    Web Components have been on developers’ radars for quite some time now. They were first introduced by Alex Russell at Fronteers Conference 2011. The concept shook the community up and became the topic of many future talks and discussions.

    In 2013 a Web Components-based framework called Polymer was released by Google to kick the tires of these new APIs, get community feedback and add some sugar and opinion.

    By now, 4 years on, Web Components should be everywhere, but in reality Chrome is the only browser with ‘some version’ of Web Components. Even with polyfills it’s clear Web Components won’t be fully embraced by the community until the majority of browsers are on-board.

    Why has this taken so long?

    To cut a long story short, vendors couldn’t agree.

    Web Components were a Google effort and little negotiation was made with other browsers before shipping. Like most negotiations in life, parties that don’t feel involved lack enthusiasm and tend not to agree.

    Web Components were an ambitious proposal. Initial APIs were high-level and complex to implement (albeit for good reasons), which only added to contention and disagreement between vendors.

    Google pushed forward; they sought feedback and gained community buy-in. But in hindsight, until other vendors shipped, usability was blocked.

    Polyfills meant that, in theory, Web Components could work in browsers that hadn’t yet implemented them, but polyfills have never been accepted as ‘suitable for production’.

    Aside from all this, Microsoft haven’t been in a position to add many new DOM APIs due to the Edge work (nearing completion), and Apple have been focusing on alternative features for Safari.

    Custom Elements

    Of all the Web Components technologies, Custom Elements have been the least contentious. There is general agreement on the value of being able to define how a piece of UI looks and behaves and being able to distribute that piece cross-browser and cross-framework.


    The term ‘upgrade’ refers to when an element transforms from a plain old HTMLElement into a shiny custom element with its defined life-cycle and prototype. Today, when elements are upgraded, their createdCallback is called.

    var proto = Object.create(HTMLElement.prototype);
    proto.createdCallback = function() { ... };
    document.registerElement('x-foo', { prototype: proto });

    There are five proposals so far from multiple vendors; two stand out as holding the most promise.


    An evolved version of the createdCallback pattern that works well with ES6 classes. The createdCallback concept lives on, but sub-classing is more conventional.

    class MyEl extends HTMLElement {
      createdCallback() { ... }
    }
    document.registerElement("my-el", MyEl);

    Like in today’s implementation, the custom element begins life as HTMLUnknownElement then some time later the prototype is swapped (or ‘swizzled’) with the registered prototype and the createdCallback is called.

    The downside of this approach is that it’s different from how the platform itself behaves. Elements are ‘unknown’ at first, then transform into their final form at some point in the future, which can lead to developer confusion.
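    The swap itself can be pictured in plain JavaScript. This is an illustrative sketch only — the upgrade function and the plain-object stand-in are invented for the example; real upgrades happen inside the engine:

    ```javascript
    // Sketch of the 'swizzle': swap the prototype, then fire createdCallback.
    function upgrade(element, registeredProto) {
      Object.setPrototypeOf(element, registeredProto);
      if (typeof element.createdCallback === 'function') {
        element.createdCallback();
      }
    }

    var registeredProto = {
      createdCallback: function () { this.upgraded = true; }
    };

    var el = {}; // stands in for an HTMLUnknownElement
    upgrade(el, registeredProto);
    // el.upgraded is now true
    ```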

    Synchronous constructor

    The constructor registered by the developer is invoked by the parser at the point the custom element is created and inserted into the tree.

    class MyEl extends HTMLElement {
      constructor() { ... }
    }
    document.registerElement("my-el", MyEl);

    Although this seems sensible, it means that any custom elements in the initial downloaded document will fail to upgrade if the scripts that contain their registerElement definition are loaded asynchronously. This is not helpful heading into a world of asynchronous ES6 modules.

    Additionally synchronous constructors come with platform issues related to .cloneNode().

    A direction is expected to be decided by vendors at a face-to-face meeting in July 2015.


    The is attribute gives developers the ability to layer the behaviour of a custom element on top of a standard built-in element.

    <input type="text" is="my-text-input">

    Arguments for

    1. Allows extending the built-in features of an element that aren’t exposed as primitives (e.g. accessibility characteristics, <form> controls, <template>).
    2. They give means to ‘progressively enhance’ an element, so that it remains functional without JavaScript.

    Arguments against

    1. Syntax is confusing.
    2. It side-steps the underlying problem that we’re missing many key accessibility primitives in the platform.
    3. It side-steps the underlying problem that we don’t have a way to properly extend built-in elements.
    4. Use-cases are limited; as soon as developers introduce Shadow DOM, they lose all built-in accessibility features.


    It is generally agreed that the is attribute is a ‘wart’ on the Custom Elements spec. Google has already implemented is and sees it as a stop-gap until lower-level primitives are exposed. Right now Mozilla and Apple would rather ship a Custom Elements V1 sooner and address this problem properly in a V2 without polluting the platform with ‘warts’.

    HTML as Custom Elements is a project by Domenic Denicola that attempts to rebuild built-in HTML elements with custom elements in an attempt to uncover DOM primitives the platform is missing.

    Shadow DOM

    Shadow DOM yielded the most contention by far between vendors. So much so that features had to be split into a ‘V1’ and ‘V2’ agenda to help reach agreement quicker.


    Distribution is the phase whereby children of a shadow host get visually ‘projected’ into slots inside the host’s Shadow DOM. This is the feature that enables your component to make use of content the user nests inside it.

    Current API

    The current API is fully declarative. Within the Shadow DOM you can use special <content> elements to define where you want the host’s children to be visually inserted.

    <content select="header"></content>

    Both Apple and Microsoft pushed back on this approach due to concerns around complexity and performance.

    A new Imperative API

    Even at the face-to-face meeting, agreement couldn’t be reached on a declarative API, so all vendors agreed to pursue an imperative solution.

    All four vendors (Microsoft, Google, Apple and Mozilla) were tasked with specifying this new API before a July 2015 deadline. So far there have been three suggestions. The simplest of the three looks something like:

    var shadow = host.createShadowRoot({
      distribute: function(nodes) {
        var slot = shadow.querySelector('content');
        for (var i = 0; i < nodes.length; i++) {
          slot.add(nodes[i]);
        }
      }
    });

    shadow.innerHTML = '<content></content>';

    // Call initially ...
    shadow.distribute();

    // then hook up to MutationObserver

    The main obstacle is: timing. If the children of the host node change and we redistribute when the MutationObserver callback fires, asking for a layout property will return an incorrect result.

    someElement.offsetTop; //=> old value
    // distribute on mutation observer callback (async)
    someElement.offsetTop; //=> new value

    Calling offsetTop will perform a synchronous layout before distribution!

    This might not seem like the end of the world, but scripts and browser internals often depend on the value of offsetTop being correct to perform many different operations, such as scrolling elements into view.

    If these problems can’t be solved we may see a retreat back to discussions over a declarative API. This will either be in the form of the current <content select> style, or the newly proposed ‘named slots’ API (from Apple).

    A new Declarative API – ‘Named Slots’

    The ‘named slots’ proposal is a simpler variation of the current ‘content select’ API, whereby the component user must explicitly label their content with the slot they wish it to be distributed to.

    Shadow Root of <x-page>:

    <slot name="header"></slot>
    <slot name="footer"></slot>
    <div>some shadow content</div>

    Usage of <x-page>:

    <x-page>
      <header slot="header">header</header>
      <footer slot="footer">footer</footer>
      <h1>my page title</h1>
      <p>my page content</p>
    </x-page>

    Composed/rendered tree (what the user sees):

    <x-page>
      <header slot="header">header</header>
      <h1>my page title</h1>
      <p>my page content</p>
      <footer slot="footer">footer</footer>
      <div>some shadow content</div>
    </x-page>

    The browser looks at the direct children of the shadow host (myXPage.children) and checks whether any of them have a slot attribute that matches the name of a <slot> element in the host’s shadowRoot.

    When a match is found, the node is visually ‘distributed’ in place of the corresponding <slot> element. Any children left undistributed at the end of this matching process are distributed to a default (unnamed) <slot> element (if one exists).
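    A rough sketch of that matching algorithm in plain JavaScript (the distribute function and the object shapes are invented for illustration; real distribution is performed by the engine on live DOM nodes):

    ```javascript
    // Match each child's `slot` attribute against the shadow root's slot
    // names; anything unmatched falls into the default (unnamed) slot.
    function distribute(children, slotNames) {
      var assignments = { 'default': [] };
      slotNames.forEach(function (name) { assignments[name] = []; });
      children.forEach(function (child) {
        if (child.slot && assignments.hasOwnProperty(child.slot)) {
          assignments[child.slot].push(child);
        } else {
          assignments['default'].push(child);
        }
      });
      return assignments;
    }

    var result = distribute(
      [{ tag: 'header', slot: 'header' },
       { tag: 'h1' },
       { tag: 'footer', slot: 'footer' }],
      ['header', 'footer']
    );
    // result.header gets the header, result.footer the footer,
    // and the unlabelled h1 lands in result.default
    ```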

    Arguments for

    1. Distribution is more explicit, easier to understand, less ‘magic’.
    2. Distribution is simpler for the engine to compute.

    Arguments against

    1. Doesn’t explain how built-in elements, like <select>, work.
    2. Decorating content with slot attributes is more work for the user.
    3. Less expressive.

    ‘closed’ vs. ‘open’

    When a shadowRoot is ‘closed’, it cannot be accessed via myHost.shadowRoot. This gives a component author some assurance that users won’t poke into implementation details, similar to how you can use closures to keep things private.

    Apple felt strongly that this was an important feature that they would block on. They argued that implementation details should never be exposed to the outside world and that ‘closed’ mode would be a required feature when ‘isolated’ custom elements became a thing.

    Google on the other hand felt that ‘closed’ shadow roots would prevent some accessibility and component tooling use-cases. They argued that it’s impossible to accidentally stumble into a shadowRoot and that if people want to they likely have a good reason. JS/DOM is open, let’s keep it that way.

    At the April meeting it became clear that to move forward, ‘mode’ needed to be a feature, but vendors were struggling to reach agreement on whether this should default to ‘open’ or ‘closed’. As a result, all agreed that for V1 ‘mode’ would be a required parameter, and thus wouldn’t need a specified default.

    element.createShadowRoot({ mode: 'open' });
    element.createShadowRoot({ mode: 'closed' });

    Shadow piercing combinators

    A ‘piercing combinator’ is a special CSS ‘combinator’ that can target elements inside a shadow root from the outside world. An example is /deep/, later renamed to >>>:

    .foo >>> div { color: red }

    When Web Components were first specified it was thought that these were required, but after looking at how they were being used it seemed to only bring problems, making it too easy to break the style boundaries that make Web Components so appealing.


    Style calculation can be incredibly fast inside a tightly scoped Shadow DOM if the engine doesn’t have to take into consideration any outside selectors or state. The very presence of piercing combinators forbids these kinds of optimisations.


    Dropping shadow piercing combinators doesn’t mean that users will never be able to customize the appearance of a component from the outside.

    CSS custom-properties (variables)

    In Firefox OS we’re using CSS Custom Properties to expose specific style properties that can be defined (or overridden) from the outside.

    External (user):

    x-foo { --x-foo-border-radius: 10px; }

    Internal (author):

    .internal-part { border-radius: var(--x-foo-border-radius, 0); }

    Custom pseudo-elements

    We have also seen interest expressed from several vendors in reintroducing the ability to define custom pseudo selectors that would expose given internal parts to be styled (similar to how we style parts of <input type="range"> today).

    x-foo::my-internal-part { ... }

    This will likely be considered for a Shadow DOM V2 specification.

    Mixins – @extend

    There is a proposed specification to bring SASS’s @extend behaviour to CSS. This would be a useful tool for component authors to allow users to provide a ‘bag’ of properties to apply to a specific internal part.

    External (user):

    .x-foo-part {
      background-color: red;
      border-radius: 4px;
    }

    Internal (author):

    .internal-part {
      @extend .x-foo-part;
    }

    Multiple shadow roots

    “Why would I want more than one shadow root on the same element?”, I hear you ask. The answer is: inheritance.

    Let’s imagine I’m writing an <x-dialog> component. Within this component I write all the markup, styling, and interactions to give me an opening and closing dialog window.

    <x-dialog>
      <h1>My title</h1>
      <p>Some details</p>
    </x-dialog>

    The shadow root pulls any user provided content into div.inner via the <content> insertion point.

    <div class="outer">
      <div class="inner">
        <content></content>
      </div>
    </div>

    I also want to create <x-dialog-alert> that looks and behaves just like <x-dialog> but with a more restricted API, a bit like alert('foo').

    var proto = Object.create(XDialog.prototype);
    proto.createdCallback = function() {
      this.shadowRoot.innerHTML = templateString;
    };
    document.registerElement('x-dialog-alert', { prototype: proto });

    The new component will have its own shadow root, but it’s designed to work on top of the parent class’s shadow root. The <shadow> represents the ‘older’ shadow root and allows us to project content inside it.


    Once you get your head round multiple shadow roots, they become a powerful concept. The downside is they bring a lot of complexity and introduce a lot of edge cases.

    Inheritance without multiple shadows

    Inheritance is still possible without multiple shadow roots, but it involves manually mutating the super class’s shadow root.

    var proto = Object.create(XDialog.prototype);
    proto.createdCallback = function() {
      var inner = this.shadowRoot.querySelector('.inner');

      var h1 = document.createElement('h1');
      h1.textContent = 'Alert';
      inner.insertBefore(h1, inner.children[0]);

      var button = document.createElement('button');
      button.textContent = 'OK';
      inner.appendChild(button);
    };
    document.registerElement('x-dialog-alert', { prototype: proto });

    The downsides of this approach are:

    1. Not as elegant.
    2. Your sub-component is dependent on the implementation details of the super-component.
    3. This wouldn’t be possible if the super component’s shadow root was ‘closed’, as this.shadowRoot would be undefined.

    HTML Imports

    HTML Imports provide a way to import all assets defined in one .html document, into the scope of another.

    <link rel="import" href="/path/to/imports/stuff.html">

    As previously stated, Mozilla is not currently intending to implement HTML Imports. This is partly because we’d like to see how ES6 modules pan out before shipping another way of importing external assets, and partly because we don’t feel they enable much that isn’t already possible.

    We’ve been working with Web Components in Firefox OS for over a year and have found that using existing module syntax (AMD or CommonJS) to resolve a dependency tree, register elements, and load them with a normal <script> tag is enough to get stuff done.

    HTML Imports do lend themselves well to a simpler/more declarative workflow, such as the older <element> and Polymer’s current registration syntax.

    With this simplicity has come criticism from the community that Imports don’t offer enough control to be taken seriously as a dependency management solution.

    Before the decision was made a few months ago, Mozilla had a working implementation behind a flag, but struggled through an incomplete specification.

    What will happen to them?

    Apple’s Isolated Custom Elements proposal makes use of an HTML Imports-style approach to provide custom elements with their own document scope; perhaps there’s a future there.

    At Mozilla we want to explore how importing custom element definitions can align with upcoming ES6 module APIs. We’d be prepared to implement if/when they appear to enable developers to do stuff they can’t already do.

    To conclude

    Web Components are a prime example of how difficult it is to get large features into the browser today. Every API added lives indefinitely and remains as an obstacle to the next.

    It’s comparable to picking apart a huge knotted ball of string, adding a bit more, then tangling it back up again. This knot, our platform, grows ever larger and more complex.

    Web Components have been in planning for over three years, but we’re optimistic the end is near. All major vendors are on board, enthusiastic, and investing significant time to help resolve the remaining issues.

    Let’s get ready to componentize the web!


  3. Introducing @counter-style


    The characters that indicate items in a list are called counters — they can be bullets or numbers. They are defined using the list-style-type CSS property. CSS1 introduced a list of predefined styles to be used as counter markers. The initial list was then slightly extended with the addition of more predefined counter styles in CSS2.1. Even with 14 predefined counter styles, it still failed to address use cases from around the world.

    Now, the CSS Counter Styles Level 3 specification, which reached Candidate Recommendation last week, adds new predefined counter styles to the existing list that should address most common counter use cases.

    In addition to the new predefined styles, the spec also offers an open-ended solution for the needs of worldwide typography by introducing the @counter-style at-rule — which lets us define custom counter styles or even extend existing ones — and the symbols() function. The latter is a shorthand for defining inline styles and is useful when the fine-grained control that @counter-style offers is not needed.

    This article provides a guide to using the new counter features of CSS Level 3.


    A @counter-style at-rule is identified by a name and defined with a set of descriptors. It takes a numerical counter value and converts it into a string or image representation. A custom counter style definition looks like this:

    @counter-style blacknwhite {
      system: cyclic;
      negative: "(" ")";
      prefix: "/";
      symbols: ◆ ◇;
      suffix: "/ ";
      range: 2 4;
      speak-as: bullets;
    }

    The different available descriptors have the following meanings:

    1. system – the system descriptor lets the author specify an algorithm to convert the numerical counter value to a basic string representation. For example, the cyclic system cycles repeatedly through the list of symbols provided to create counter representations.
    2. negative – the negative descriptor provides the ability to specify additional symbols, such as a negative sign, to be appended/prepended to a counter representation if the counter value is negative. If only one symbol is specified, it will be added in front of the marker representation. A second value, if specified (as in the example below), will be added after the marker.
      @counter-style neg {
        system: numeric;
        symbols: "0" "1" "2" "3" "4" "5" "6" "7" "8" "9";
        negative: "(" ")";
      }

      The above counter style definition will wrap negative markers in a pair of parentheses, instead of preceding them with the minus sign.

    3. prefix – specifies a symbol that will be prepended to the final counter representation.
    4. suffix – specifies a symbol that will be appended to the final counter representation. The default value of the suffix descriptor is a full stop followed by a space. If you need to replace the full stop “.” with a parenthesis, you can specify the suffix descriptor:
      @counter-style decimal-parenthesis {
        system: numeric;
        symbols: "0" "1" "2" "3" "4" "5" "6" "7" "8" "9";
        suffix: ") ";
      }
    5. range – lets the author define a range of values over which a counter style is applicable. If a counter value falls outside the specified range, then the specified or default fallback style is used to represent that particular counter value, for example:
      @counter-style range-example {
        system: cyclic;
        symbols: A B C D;
        range: 4 8;
      }

      The above rule will apply the style only for counter values starting from 4 to 8. The fallback style, decimal, will be used for the rest of the markers.

    6. pad – as the name suggests, the pad descriptor lets the author specify a padding length when the counter representations need to be of a minimum length. For example, if you want the counters to start at 01 and go through 02, 03, 04, etc., then the following pad descriptor should be used:
      @counter-style decimal-new {
        system: numeric;
        symbols: "0" "1" "2" "3" "4" "5" "6" "7" "8" "9";
        pad: 2 "0";
      }

      (For a better way to achieve the same result, but without specifying the counter symbols, see the section below about extending existing systems.)

    7. fallback – the fallback specifies a style to fall back to if the specified system is unable to construct a counter representation or if a particular counter value falls outside the specified range.
    8. symbols and additive-symbols – specify the symbols that will be used to construct the counter representation. The symbols descriptor is used in most cases other than when the system specified is ‘additive’. While symbols specifies individual symbols, the additive-symbols descriptor is composed of what are known as additive tuples, each of which consists of a symbol and a non-negative weight. Symbols can be specified as strings, images in the form url(image-name), or identifiers. The example below uses images as symbols.
      @counter-style winners-list {
        system: fixed;
        symbols: url(gold-medal.svg) url(silver-medal.svg) url(bronze-medal.svg);
        suffix: " ";
      }
    9. speak-as – specifies how a counter value should be spoken by speech synthesizers such as screen readers. For example, the author can specify a marker symbol to be read out as a word, a number, or as an audio cue for a bullet point. The example below reads out the counter values as numbers.
      @counter-style circled-digits {
        system: fixed;
        symbols: ➀ ➁ ➂;
        suffix: ' ';
        speak-as: numbers;
      }

      You can then use a named style with list-style-type or with the counters() function as you normally would use a predefined counter style.

      ul {
        list-style-type: circled-digits;
      }

      On a supported browser, the counter style above will render lists with markers like this:

      Image: a list rendered with the circled-digits markers ➀ ➁ ➂

    The system descriptor

    The system descriptor takes one of the predefined algorithms, which describe how to convert a numerical counter value to its representation. The system descriptor can have values that are cyclic, numeric, alphabetic, symbolic, additive, fixed and extends.

    If the system specified is cyclic, it will cycle through the list of symbols provided. Once the end of the list is reached, it will loop back to the beginning and start over. The fixed system is similar, but once the list of symbols is exhausted, it will resort to the fallback style to represent the rest of the markers. The symbolic, numeric and alphabetic systems are similar to the above two, but with their own subtle differences.

    The additive system requires that the additive-symbols descriptor be specified instead of the symbols descriptor. additive-symbols takes a list of additive tuples. Each additive tuple consists of a symbol and its numerical weight. The additive system is used to represent “sign-value” numbering systems such as the Roman numerals.
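    The algorithm behind the additive system is essentially greedy subtraction: repeatedly emit the heaviest symbol that still fits, then move down the list. A sketch in JavaScript (the additiveRepresent helper is invented for illustration and is not a real API):

    ```javascript
    // Greedy sketch of the additive system. Tuples must be sorted by
    // descending weight, as they are in an additive-symbols descriptor.
    function additiveRepresent(value, tuples) {
      var out = '';
      for (var i = 0; i < tuples.length; i++) {
        while (value >= tuples[i].weight && tuples[i].weight > 0) {
          out += tuples[i].symbol;
          value -= tuples[i].weight;
        }
      }
      // A nonzero remainder means the system can't represent the value,
      // so the fallback style would be used instead.
      return value === 0 ? out : null;
    }

    var roman = [
      { weight: 100, symbol: 'C' }, { weight: 90, symbol: 'XC' },
      { weight: 50, symbol: 'L' },  { weight: 40, symbol: 'XL' },
      { weight: 10, symbol: 'X' },  { weight: 9, symbol: 'IX' },
      { weight: 5, symbol: 'V' },   { weight: 4, symbol: 'IV' },
      { weight: 1, symbol: 'I' }
    ];

    additiveRepresent(48, roman); // → 'XLVIII'
    ```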

    We can rewrite the Roman numerals as a @counter-style rule using the additive system like this:

    @counter-style custom-roman {
      system: additive;
      range: 1 100;
      additive-symbols: 100 C, 90 XC, 50 L, 40 XL, 10 X, 9 IX, 5 V, 4 IV, 1 I;
    }

    Extending existing or custom counter styles

    The extends system lets authors create custom counter styles based on existing ones. Authors can use the algorithm of another counter style, but alter its other aspects. If a @counter-style rule using the extends system has any unspecified descriptors, their values will be taken from the extended counter style specified.

    For example, if you want to define a new style which is similar to decimal, but has the string “Chapter ” in front of all the marker values, then rather than creating an entirely new style you can use the extends system to extend the decimal style like this:

    @counter-style chapters {
      system: extends decimal;
      prefix: 'Chapter ';
    }

    Or, if you need the decimal style to start numbers from 01, 02, 03.. instead of 1, 2, 3.., you can simply specify the pad descriptor:

    @counter-style decimal-new {
      system: extends decimal;
      pad: 2 "0";
    }

    A counter style using the extends system should not specify the symbols and additive-symbols descriptors.

    See the reference page on MDN for details and usage examples of all system values.

    A shorthand – The symbols() function

    While a @counter-style at-rule lets you tailor a custom counter style, often such elaborate control is not needed. That is where the symbols() function comes in: it lets you define a counter style inline, as a property value. Since counter styles defined this way are nameless (anonymous), they cannot be reused elsewhere in the stylesheet.

    An example anonymous counter style looks like this:

    ul {
      list-style-type: symbols(fixed url(one.svg) url(two.svg));
    }

    Even more predefined counter styles

    CSS Lists 3 vastly extends the number of predefined styles available. Along with the predefined styles such as decimal, disc, circle and square that existed before, additional styles such as hebrew, thai, gujarati, disclosure-open/close, etc. are also added to the predefined list. The full list of predefined styles enabled by the spec can be seen here.
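    In a supporting browser, the new predefined styles drop straight into list-style-type like any other keyword (class names here are arbitrary examples):

    ```css
    ol.hebrew-list   { list-style-type: hebrew; }
    ol.thai-list     { list-style-type: thai; }
    ol.gujarati-list { list-style-type: gujarati; }
    ```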

    Browser support

    Support for @counter-style and the symbols() function, along with the new predefined styles, landed in Firefox 33. Google Chrome hasn’t implemented @counter-style or the symbols() function yet, but most of the new numeric and alphabetic predefined styles have been implemented. There is no support in IE so far. @counter-style support has also landed in the latest releases of Firefox for Android and will ship in the upcoming Firefox OS 2.1; support for symbols() will follow in Firefox OS 2.2.


    Check out this demo page and see if @counter-style is supported in your favorite browsers.

  4. Mozilla and Web Components: Update

    Editor’s note: Mozilla has a long history of participating in standards development. The post below shows a real-time slice of how standards are debated and adopted. The goal is to update developers who are most affected by implementation decisions we make in Firefox. We are particularly interested in getting feedback from JavaScript library and framework developers.

    Mozilla has been working on Web Components — a technology encompassing HTML imports, custom elements, and shadow DOM — for a while now and testing this approach in Gaia, the frontend of Firefox OS. Unfortunately, our feedback into the standards process has not always resulted in the changes required for us to ship Web Components. Therefore we decided to reevaluate our stance with members of the developer community.

    We came up with the following tentative plan for shipping Web Components in Firefox and we would really appreciate input from the developer community as we move this forward. Web Components changes a core aspect of the Web Platform and getting it right is important. We believe the way to do that is by having the change be driven by the hard learned lessons from JavaScript library developers.

    • Mozilla will not ship an implementation of HTML Imports. We expect that once JavaScript modules — a feature derived from JavaScript libraries written by the developer community — ship, the way we look at this problem will have changed. We have also learned from Gaia and others that the lack of HTML Imports is not a problem, as the functionality can easily be provided by a polyfill if desired.
    • Mozilla will ship an implementation of custom elements. Exposing the lifecycle is a very important aspect for the creation of components. We will work with the standards community to use Symbol-named properties for the callbacks to prevent name collisions. We will also ensure the strategy surrounding subclassing is sound with the latest work on that front in JavaScript and that the callbacks are sufficiently capable to describe the lifecycle of elements or can at least be changed in that direction.
    • Mozilla will ship an implementation of shadow DOM. We think work needs to be done to decouple style isolation from event retargeting to make event delegation possible in frameworks, and we would like to ensure distribution is sufficiently extensible beyond Selectors; Gaia, for example, would like to see this ability.

    Our next steps will be working with the standards community to make these changes happen, making sure there is sufficient test coverage in web-platform-tests, and making sure the specifications become detailed enough to implement from.

    So please let us know what you think here in the comments or directly on the public-webapps standards list!

  5. Introducing the JavaScript Internationalization API

    Firefox 29 issued half a year ago, so this post is long overdue. Nevertheless I wanted to pause for a second to discuss the Internationalization API first shipped on desktop in that release (and passing all tests!). Norbert Lindenberg wrote most of the implementation, and I reviewed it and now maintain it. (Work by Makoto Kato should bring this to Android soon; b2g may take longer due to some b2g-specific hurdles. Stay tuned.)

    What’s internationalization?

    Internationalization (i18n for short — i, eighteen characters, n) is the process of writing applications in a way that allows them to be easily adapted for audiences from varied places, using varied languages. It’s easy to get this wrong by inadvertently assuming one’s users come from one place and speak one language, especially if you don’t even know you’ve made an assumption.

    function formatDate(d)
    {
      // Everyone uses month/date/year...right?
      var month = d.getMonth() + 1;
      var date = d.getDate();
      var year = d.getFullYear();
      return month + "/" + date + "/" + year;
    }

    function formatMoney(amount)
    {
      // All money is dollars with two fractional digits...right?
      return "$" + amount.toFixed(2);
    }

    function sortNames(names)
    {
      function sortAlphabetically(a, b)
      {
        var left = a.toLowerCase(), right = b.toLowerCase();
        if (left > right)
          return 1;
        if (left === right)
          return 0;
        return -1;
      }

      // Names always sort alphabetically...right?
      return names.sort(sortAlphabetically);
    }

    JavaScript’s historical i18n support is poor

    i18n-aware formatting in traditional JS used the various toLocaleString() methods. The resulting strings contained whatever details the implementation chose to provide: there was no way to pick and choose (did you need a weekday in that formatted date? is the year irrelevant?). Even if the proper details were included, the format might be wrong, e.g. decimal when a percentage was desired. And you couldn’t choose a locale.

    As for sorting, JS provided almost no useful locale-sensitive text-comparison (collation) functions. localeCompare() existed but with a very awkward interface unsuited for use with sort. And it too didn’t permit choosing a locale or specific sort order.

    These limitations are bad enough that — this surprised me greatly when I learned it! — serious web applications that need i18n capabilities (most commonly, financial sites displaying currencies) will box up the data, send it to a server, have the server perform the operation, and send it back to the client. Server roundtrips just to format amounts of money. Yeesh.

    A new JS Internationalization API

    The new ECMAScript Internationalization API greatly improves JavaScript’s i18n capabilities. It provides all the flourishes one could want for formatting dates and numbers and sorting text. The locale is selectable, with fallback if the requested locale is unsupported. Formatting requests can specify the particular components to include. Custom formats for percentages, significant digits, and currencies are supported. Numerous collation options are exposed for use in sorting text. And if you care about performance, the up-front work to select a locale and process options can now be done once, instead of once every time a locale-dependent operation is performed.

    That said, the API is not a panacea. The API is “best effort” only. Precise outputs are almost always deliberately unspecified. An implementation could legally support only the oj locale, or it could ignore (almost all) provided formatting options. Most implementations will have high-quality support for many locales, but it’s not guaranteed (particularly on resource-constrained systems such as mobile).

    Under the hood, Firefox’s implementation depends upon the International Components for Unicode library (ICU), which in turn depends upon the Unicode Common Locale Data Repository (CLDR) locale data set. Our implementation is self-hosted: most of the implementation atop ICU is written in JavaScript itself. We hit a few bumps along the way (we haven’t self-hosted anything this large before), but nothing major.

    The Intl interface

    The i18n API lives on the global Intl object. Intl contains three constructors: Intl.Collator, Intl.DateTimeFormat, and Intl.NumberFormat. Each constructor creates an object exposing the relevant operation, efficiently caching locale and options for the operation. Creating such an object follows this pattern:

    var ctor = "Collator"; // or the others
    var instance = new Intl[ctor](locales, options);

    locales is a string specifying a single language tag or an arraylike object containing multiple language tags. Language tags are strings like en (English generally), de-AT (German as used in Austria), or zh-Hant-TW (Chinese as used in Taiwan, using the traditional Chinese script). Language tags can also include a “Unicode extension”, of the form -u-key1-value1-key2-value2..., where each key is an “extension key”. The various constructors interpret these specially.

    options is an object whose properties (or their absence, by evaluating to undefined) determine how the formatter or collator behaves. Its exact interpretation is determined by the individual constructor.

    Given locale information and options, the implementation will try to produce the closest behavior it can to the “ideal” behavior. Firefox supports 400+ locales for collation and 600+ locales for date/time and number formatting, so it’s very likely (but not guaranteed) the locales you might care about are supported.

    Intl generally provides no guarantee of particular behavior. If the requested locale is unsupported, Intl allows best-effort behavior. Even if the locale is supported, behavior is not rigidly specified. Never assume that a particular set of options corresponds to a particular format. The phrasing of the overall format (encompassing all requested components) might vary across browsers, or even across browser versions. Individual components’ formats are unspecified: a short-format weekday might be “S”, “Sa”, or “Sat”. The Intl API isn’t intended to expose exactly specified behavior.

    Date/time formatting


    The primary options properties for date/time formatting are as follows:

    weekday, era
    "narrow", "short", or "long". (era refers to typically longer-than-year divisions in a calendar system: BC/AD, the current Japanese emperor’s reign, or others.)
    month
    "2-digit", "numeric", "narrow", "short", or "long"
    year, day
    "2-digit" or "numeric"
    hour, minute, second
    "2-digit" or "numeric"
    timeZoneName
    "short" or "long"
    timeZone
    Case-insensitive "UTC" will format with respect to UTC. Values like "CEST" and "America/New_York" don’t have to be supported, and they don’t currently work in Firefox.

    The values don’t map to particular formats: remember, the Intl API almost never specifies exact behavior. But the intent is that "narrow", "short", and "long" produce output of corresponding size — “S” or “Sa”, “Sat”, and “Saturday”, for example. (Output may be ambiguous: Saturday and Sunday both could produce “S”.) "2-digit" and "numeric" map to two-digit number strings or full-length numeric strings: “70” and “1970”, for example.

    The final used options are largely the requested options. However, if you don’t specifically request any weekday/year/month/day/hour/minute/second, then year/month/day will be added to your provided options.
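    A quick way to see this defaulting in action is to inspect the resolved options (resolvedOptions() is discussed further below); a small sketch:

    ```javascript
    var print = console.log; // stand-in for this article's print()

    // With no components requested, year, month, and day are imputed:
    var defaults = new Intl.DateTimeFormat("en-US").resolvedOptions();
    print(defaults.year !== undefined);    // true
    print(defaults.weekday !== undefined); // false -- weekday is never imputed

    // Requesting any component suppresses the imputed defaults:
    var hourOnly =
      new Intl.DateTimeFormat("en-US", { hour: "numeric" }).resolvedOptions();
    print(hourOnly.year); // undefined
    ```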

    Beyond these basic options are a few special options:

    hour12
    Specifies whether hours will be in 12-hour or 24-hour format. The default is typically locale-dependent. (Details such as whether midnight is zero-based or twelve-based and whether leading zeroes are present are also locale-dependent.)

    There are also two special properties, localeMatcher (taking either "lookup" or "best fit") and formatMatcher (taking either "basic" or "best fit"), each defaulting to "best fit". These affect how the right locale and format are selected. The use cases for these are somewhat esoteric, so you should probably ignore them.

    Locale-centric options

    DateTimeFormat also allows formatting using customized calendaring and numbering systems. These details are effectively part of the locale, so they’re specified in the Unicode extension in the language tag.

    For example, Thai as spoken in Thailand has the language tag th-TH. Recall that a Unicode extension has the format -u-key1-value1-key2-value2.... The calendaring system key is ca, and the numbering system key is nu. The Thai numbering system has the value thai, and the Chinese calendaring system has the value chinese. Thus to format dates in this overall manner, we tack a Unicode extension containing both these key/value pairs onto the end of the language tag: th-TH-u-ca-chinese-nu-thai.
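    The construction is purely mechanical string concatenation, so it can be sketched as follows (the helper name is made up for illustration):

    ```javascript
    var print = console.log; // stand-in for this article's print()

    // Append a Unicode extension (-u-) with the given key/value pairs to a tag.
    function withUnicodeExtension(tag, pairs) {
      var parts = [tag, "u"];
      for (var key in pairs)
        parts.push(key, pairs[key]);
      return parts.join("-");
    }

    var thaiTag = withUnicodeExtension("th-TH", { ca: "chinese", nu: "thai" });
    print(thaiTag); // "th-TH-u-ca-chinese-nu-thai"
    ```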

    For more information on the various calendaring and numbering systems, see the full DateTimeFormat documentation.


    After creating a DateTimeFormat object, the next step is to use it to format dates via the handy format() function. Conveniently, this function is a bound function: you don’t have to call it on the DateTimeFormat directly. Then provide it a timestamp or Date object.

    Putting it all together, here are some examples of how to create DateTimeFormat options for particular uses, with current behavior in Firefox.

    var msPerDay = 24 * 60 * 60 * 1000;
    // July 17, 2014 00:00:00 UTC.
    var july172014 = new Date(msPerDay * (44 * 365 + 11 + 197));

    Let’s format a date for English as used in the United States. Let’s include two-digit month/day/year, plus two-digit hours/minutes, and a short time zone to clarify that time. (The result would obviously be different in another time zone.)

    var options =
      { year: "2-digit", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short" };
    var americanDateTime =
      new Intl.DateTimeFormat("en-US", options).format;
    print(americanDateTime(july172014)); // 07/16/14, 5:00 PM PDT

    Or let’s do something similar for Portuguese — ideally as used in Brazil, but in a pinch Portugal works. Let’s go for a little longer format, with full year and spelled-out month, but make it UTC for portability.

    var options =
      { year: "numeric", month: "long", day: "numeric",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short", timeZone: "UTC" };
    var portugueseTime =
      new Intl.DateTimeFormat(["pt-BR", "pt-PT"], options);
    print(portugueseTime.format(july172014));
    // 17 de julho de 2014 00:00 GMT

    How about a compact, UTC-formatted weekly Swiss train schedule? We’ll try the official languages from most to least popular to choose the one that’s most likely to be readable.

    var swissLocales = ["de-CH", "fr-CH", "it-CH", "rm-CH"];
    var options =
      { weekday: "short",
        hour: "numeric", minute: "numeric",
        timeZone: "UTC", timeZoneName: "short" };
    var swissTime =
      new Intl.DateTimeFormat(swissLocales, options).format;
    print(swissTime(july172014)); // Do. 00:00 GMT

    Or let’s try a date in descriptive text by a painting in a Japanese museum, using the Japanese calendar with year and era:

    var jpYearEra =
      new Intl.DateTimeFormat("ja-JP-u-ca-japanese",
                              { year: "numeric", era: "long" });
    print(jpYearEra.format(july172014)); // 平成26年

    And for something completely different, a longer date for use in Thai as used in Thailand — but using the Thai numbering system and Chinese calendar. (Quality implementations such as Firefox’s would treat plain th-TH as th-TH-u-ca-buddhist-nu-latn, imputing Thailand’s typical Buddhist calendar system and Latin 0-9 numerals.)

    var options =
      { year: "numeric", month: "long", day: "numeric" };
    var thaiDate =
      new Intl.DateTimeFormat("th-TH-u-nu-thai-ca-chinese", options);
    print(thaiDate.format(july172014)); // ๒๐ 6 ๓๑

    Calendar and numbering system bits aside, it’s relatively simple. Just pick your components and their lengths.

    Number formatting


    The primary options properties for number formatting are as follows:

    style
    "currency", "percent", or "decimal" (the default) to format a value of that kind.
    currency
    A three-letter currency code, e.g. USD or CHF. Required if style is "currency", otherwise meaningless.
    currencyDisplay
    "code", "symbol", or "name", defaulting to "symbol". "code" will use the three-letter currency code in the formatted string. "symbol" will use a currency symbol such as $ or £. "name" typically uses some sort of spelled-out version of the currency. (Firefox currently only supports "symbol", but this will be fixed soon.)
    minimumIntegerDigits
    An integer from 1 to 21 (inclusive), defaulting to 1. The resulting string is front-padded with zeroes until its integer component contains at least this many digits. (For example, if this value were 2, formatting 3 might produce “03”.)
    minimumFractionDigits, maximumFractionDigits
    Integers from 0 to 20 (inclusive). The resulting string will have at least minimumFractionDigits, and no more than maximumFractionDigits, fractional digits. The default minimum is currency-dependent (usually 2, rarely 0 or 3) if style is "currency", otherwise 0. The default maximum is 0 for percents, 3 for decimals, and currency-dependent for currencies.
    minimumSignificantDigits, maximumSignificantDigits
    Integers from 1 to 21 (inclusive). If present, these override the integer/fraction digit control above to determine the minimum/maximum significant figures in the formatted number string, as determined in concert with the number of decimal places required to accurately specify the number. (Note that in a multiple of 10 the significant digits may be ambiguous, as in “100” with its one, two, or three significant digits.)
    useGrouping
    Boolean (defaulting to true) determining whether the formatted string will contain grouping separators (e.g. “,” as English thousands separator).
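    A compact sketch of the digit-control options in action (the outputs shown are typical en-US results from an ICU-backed implementation; remember exact output is not guaranteed):

    ```javascript
    var print = console.log; // stand-in for this article's print()

    // Pad the integer part to three digits and force two fractional digits:
    var padded = new Intl.NumberFormat("en-US", { minimumIntegerDigits: 3,
                                                  minimumFractionDigits: 2 });
    print(padded.format(7)); // "007.00"

    // Cap the output at three significant digits:
    var sigFigs = new Intl.NumberFormat("en-US",
                                        { maximumSignificantDigits: 3 });
    print(sigFigs.format(3.14159)); // "3.14"

    // Suppress grouping separators:
    var ungrouped = new Intl.NumberFormat("en-US", { useGrouping: false });
    print(ungrouped.format(1234567)); // "1234567"
    ```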

    NumberFormat also recognizes the esoteric, mostly ignorable localeMatcher property.

    Locale-centric options

    Just as DateTimeFormat supported custom numbering systems in the Unicode extension using the nu key, so too does NumberFormat. For example, the language tag for Chinese as used in China is zh-CN. The value for the Han decimal numbering system is hanidec. To format numbers for these systems, we tack a Unicode extension onto the language tag: zh-CN-u-nu-hanidec.

    For complete information on specifying the various numbering systems, see the full NumberFormat documentation.


    NumberFormat objects have a format function property just as DateTimeFormat objects do. And as there, the format function is a bound function that may be used in isolation from the NumberFormat.

    Here are some examples of how to create NumberFormat options for particular uses, with Firefox’s behavior. First let’s format some money for use in Chinese as used in China, specifically using Han decimal numbers (instead of much more common Latin numbers). Select the "currency" style, then use the code for Chinese renminbi (yuan), grouping by default, with the usual number of fractional digits.

    var hanDecimalRMBInChina =
      new Intl.NumberFormat("zh-CN-u-nu-hanidec",
                            { style: "currency", currency: "CNY" });
    print(hanDecimalRMBInChina.format(1314.25)); // ¥ 一,三一四.二五

    Or let’s format a United States-style gas price, with its peculiar thousandths-place 9, for use in English as used in the United States.

    var gasPrice =
      new Intl.NumberFormat("en-US",
                            { style: "currency", currency: "USD",
                              minimumFractionDigits: 3 });
    print(gasPrice.format(5.259)); // $5.259

    Or let’s try a percentage in Arabic, meant for use in Egypt. Make sure the percentage has at least two fractional digits. (Note that this and all the other RTL examples may appear with different ordering in RTL context, e.g. ٤٣٫٨٠٪ instead of ٤٣٫٨٠٪.)

    var arabicPercent =
      new Intl.NumberFormat("ar-EG",
                            { style: "percent",
                              minimumFractionDigits: 2 }).format;
    print(arabicPercent(0.438)); // ٤٣٫٨٠٪

    Or suppose we’re formatting for Persian as used in Afghanistan, and we want at least two integer digits and no more than two fractional digits.

    var persianDecimal =
      new Intl.NumberFormat("fa-AF",
                            { minimumIntegerDigits: 2,
                              maximumFractionDigits: 2 });
    print(persianDecimal.format(3.1416)); // ۰۳٫۱۴

    Finally, let’s format an amount of Bahraini dinars, for Arabic as used in Bahrain. Unusually compared to most currencies, Bahraini dinars divide into thousandths (fils), so our number will have three places. (Again note that apparent visual ordering should be taken with a grain of salt.)

    var bahrainiDinars =
      new Intl.NumberFormat("ar-BH",
                            { style: "currency", currency: "BHD" });
    print(bahrainiDinars.format(3.17)); // د.ب.‏ ٣٫١٧٠



    Collation

    The primary options properties for collation are as follows:

    usage
    "sort" or "search" (defaulting to "sort"), specifying the intended use of this Collator. (A search collator might want to consider more strings equivalent than a sort collator would.)
    sensitivity
    "base", "accent", "case", or "variant". This affects how sensitive the collator is to characters that have the same “base letter” but have different accents/diacritics and/or case. (Base letters are locale-dependent: “a” and “ä” have the same base letter in German but are different letters in Swedish.) "base" sensitivity considers only the base letter, ignoring modifications (so for German “a”, “A”, and “ä” are considered the same). "accent" considers the base letter and accents but ignores case (so for German “a” and “A” are the same, but “ä” differs from both). "case" considers the base letter and case but ignores accents (so for German “a” and “ä” are the same, but “A” differs from both). Finally, "variant" considers base letter, accents, and case (so for German “a”, “ä”, and “A” all differ). If usage is "sort", the default is "variant"; otherwise it’s locale-dependent.
    numeric
    Boolean (defaulting to false) determining whether complete numbers embedded in strings are considered when sorting. For example, numeric sorting might produce "F-4 Phantom II", "F-14 Tomcat", "F-35 Lightning II"; non-numeric sorting might produce "F-14 Tomcat", "F-35 Lightning II", "F-4 Phantom II".
    caseFirst
    "upper", "lower", or "false" (the default). Determines how case is considered when sorting: "upper" places uppercase letters first ("B", "a", "c"), "lower" places lowercase first ("a", "c", "B"), and "false" ignores case entirely ("a", "B", "c"). (Note: Firefox currently ignores this property.)
    ignorePunctuation
    Boolean (defaulting to false) determining whether to ignore embedded punctuation when performing the comparison (for example, so that "biweekly" and "bi-weekly" compare equivalent).
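    These options can be combined freely; here is a small sketch using en-US (outputs are best-effort by spec, though these particular behaviors are dependable in practice):

    ```javascript
    var print = console.log; // stand-in for this article's print()

    // Base sensitivity: accents and case are ignored entirely.
    var base = new Intl.Collator("en-US", { sensitivity: "base" });
    print("résumé", "RESUME")); // 0

    // Numeric: embedded numbers compare as numbers, not character by character.
    var natural = new Intl.Collator("en-US", { numeric: true });
    print("F-4", "F-14") < 0); // true -- 4 sorts before 14

    // ignorePunctuation: "biweekly" and "bi-weekly" compare equivalent.
    var noPunct = new Intl.Collator("en-US", { ignorePunctuation: true });
    print("biweekly", "bi-weekly")); // 0
    ```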

    And there’s that localeMatcher property that you can probably ignore.

    Locale-centric options

    The main Collator option specified as part of the locale’s Unicode extension is co, selecting the kind of sorting to perform: phone book (phonebk), dictionary (dict), and many others.

    Additionally, the keys kn and kf may, optionally, duplicate the numeric and caseFirst properties of the options object. But they’re not guaranteed to be supported in the language tag, and options is much clearer than language tag components. So it’s best to only adjust these options through options.

    These key-value pairs are included in the Unicode extension the same way they’ve been included for DateTimeFormat and NumberFormat; refer to those sections for how to specify these in a language tag.


    Collator objects have a compare function property. This function accepts two arguments x and y and returns a number less than zero if x compares less than y, 0 if x compares equal to y, or a number greater than zero if x compares greater than y. As with the format functions, compare is a bound function that may be extracted for standalone use.

    Let’s try sorting a few German surnames, for use in German as used in Germany. There are actually two different sort orders in German, phonebook and dictionary. Phonebook sort emphasizes sound, and it’s as if “ä”, “ö”, and so on were expanded to “ae”, “oe”, and so on prior to sorting.

    var names =
      ["Hochberg", "Hönigswald", "Holzman"];
    var germanPhonebook = new Intl.Collator("de-DE-u-co-phonebk");
    // as if sorting ["Hochberg", "Hoenigswald", "Holzman"]:
    //   Hochberg, Hönigswald, Holzman
    print(names.sort(germanPhonebook.compare).join(", "));

    Some German words conjugate with extra umlauts, so in dictionaries it’s sensible to order ignoring umlauts (except when ordering words differing only by umlauts: schon before schön).

    var germanDictionary = new Intl.Collator("de-DE-u-co-dict");
    // as if sorting ["Hochberg", "Honigswald", "Holzman"]:
    //   Hochberg, Holzman, Hönigswald
    print(names.sort(germanDictionary.compare).join(", "));

    Or let’s sort a list of Firefox versions with various typos (different capitalizations, random accents and diacritical marks, extra hyphenation), in English as used in the United States. We want to sort respecting version number, so do a numeric sort so that numbers in the strings are compared as numbers, not character by character.

    var firefoxen =
      ["FireFøx 3.6",
       "Fire-fox 1.0",
       "Firefox 29",
       "FÍrefox 3.5",
       "Fírefox 18"];
    var usVersion =
      new Intl.Collator("en-US",
                        { sensitivity: "base",
                          numeric: true,
                          ignorePunctuation: true });
    // Fire-fox 1.0, FÍrefox 3.5, FireFøx 3.6, Fírefox 18, Firefox 29
    print(firefoxen.sort(usVersion.compare).join(", "));

    Last, let’s do some locale-aware string searching that ignores case and accents, again in English as used in the United States.

    // Comparisons work with both composed and decomposed forms.
    var decoratedBrowsers =
      [
       "A\u0362maya",  // A͢maya
       "CH\u035Brôme", // CH͛rôme
       "FirèFOX",
       "sAfàri",
       "o\u0323pERA",  // ọpERA
       "I\u0352E",     // I͒E
      ];
    var fuzzySearch =
      new Intl.Collator("en-US",
                        { usage: "search", sensitivity: "base" });
    function findBrowser(browser)
    {
      function cmp(other)
      {
        return, other) === 0;
      }
      return cmp;
    }
    print(decoratedBrowsers.findIndex(findBrowser("Firêfox"))); // 2
    print(decoratedBrowsers.findIndex(findBrowser("Safåri")));  // 3
    print(decoratedBrowsers.findIndex(findBrowser("Ãmaya")));   // 0
    print(decoratedBrowsers.findIndex(findBrowser("Øpera")));   // 4
    print(decoratedBrowsers.findIndex(findBrowser("Chromè")));  // 1
    print(decoratedBrowsers.findIndex(findBrowser("IË")));      // 5

    Odds and ends

    It may be useful to determine whether support for some operation is provided for particular locales, or to determine whether a locale is supported. Intl provides supportedLocalesOf() functions on each constructor, and resolvedOptions() functions on each prototype, to expose this information.

    var navajoLocales =
      Intl.Collator.supportedLocalesOf(["nv"], { usage: "sort" });
    print(navajoLocales.length > 0
          ? "Navajo collation supported"
          : "Navajo collation not supported");
    var germanFakeRegion =
      new Intl.DateTimeFormat("de-XX", { timeZone: "UTC" });
    var usedOptions = germanFakeRegion.resolvedOptions();
    print(usedOptions.locale);   // de
    print(usedOptions.timeZone); // UTC

    Legacy behavior

    The ES5 toLocaleString-style and localeCompare functions previously had no particular semantics, accepted no particular options, and were largely useless. So the i18n API reformulates them in terms of Intl operations. Each method now accepts additional trailing locales and options arguments, interpreted just as the Intl constructors would do. (Except that for toLocaleTimeString and toLocaleDateString, different default components are used if options aren’t provided.)

    For brief use where precise behavior doesn’t matter, the old methods are fine to use. But if you need more control or are formatting or comparing many times, it’s best to use the Intl primitives directly.
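    For example, both approaches below produce the same kind of output, but the second does the locale and options processing only once (outputs shown are typical en-US results):

    ```javascript
    var print = console.log; // stand-in for this article's print()

    // One-off formatting via a retrofitted ES5-era method,
    // with the new trailing locales/options arguments:
    var label = (1234.56).toLocaleString("en-US",
                                         { style: "currency",
                                           currency: "USD" });
    print(label); // "$1,234.56"

    // Reusing a single Intl formatter is cheaper when formatting many values:
    var usd = new Intl.NumberFormat("en-US",
                                    { style: "currency", currency: "USD" });
    [19.99, 5.25].forEach(function (price) { print(usd.format(price)); });
    // "$19.99"
    // "$5.25"
    ```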


    Internationalization is a fascinating topic whose complexity is bounded only by the varied nature of human communication. The Internationalization API addresses a small but quite useful portion of that complexity, making it easier to produce locale-sensitive web applications. Go use it!

    (And a special thanks to Norbert Lindenberg, Anas El Husseini, Simon Montagu, Gary Kwong, Shu-yu Guo, Ehsan Akhgari, the people of, and anyone I may have forgotten [sorry!] who provided feedback on this article or assisted me in producing and critiquing the examples. The English and German examples were the limit of my knowledge, and I’d have been completely lost on the other examples without their assistance. Blame all remaining errors on me. Thanks again!)

  6. CSS Variables in Firefox Nightly

    As reported by Cameron McCormack, Firefox Nightly (version 29) now supports CSS variables. You can get a quick overview in this short screencast:

    You can define variables in a context with a var- prefix and then implement them using the var() instruction. For example:

    :root {
      var-companyblue: #369;
      var-lighterblue: powderblue;
    }
    h1 {
      color: var(companyblue);
    }
    h2 {
      color: var(lighterblue);
    }

    <h1>Header on page</h1>
    <h2>Subheader on page</h2>

    This defines the two variables companyblue and lighterblue for the root element of the document which results in (you can try it here using Firefox Nightly):

    Variables are scoped, which means you can overwrite them:

    :root {
      var-companyblue: #369;
      var-lighterblue: powderblue;
    }
    .partnerbadge {
      var-companyblue: #036;
      var-lighterblue: #cfc;
    }
    h1 {
      color: var(companyblue);
    }
    h2 {
      color: var(lighterblue);
    }

    <h1>Header on page</h1>
    <h2>Subheader on page</h2>
    <div class="partnerbadge">
      <h1>Header on page</h1>
      <h2>Subheader on page</h2>
    </div>

    Using these settings, headings inside an element with a class of partnerbadge will now get the other blue settings:

    Variables can be any value you want to define and you can use them like any other value, for example inside a calc() calculation. You can also reset them to other values, for example inside a media query. This example shows many of these possibilities.

    :root {
      var-companyblue: #369;
      var-lighterblue: powderblue;
      var-largemargin: 20px;
      var-smallmargin: calc(var(largemargin) / 2);
      var-borderstyle: 5px solid #000;
      var-headersize: 24px;
    }
    .partnerbadge {
      var-companyblue: #036;
      var-lighterblue: #369;
      var-headersize: calc(var(headersize)/2);
      transition: 0.5s;
    }
    @media (max-width: 400px) {
      .partnerbadge {
         var-borderstyle: none;
         background: #eee;
      }
    }

    /* Applying the variables */
    body {font-family: 'open sans', sans-serif;}
    h1 {
      color: var(companyblue);
      margin: var(largemargin) 0;
      font-size: var(headersize);
    }
    h2 {
      color: var(lighterblue);
      margin: var(smallmargin) 0;
      font-size: calc(var(headersize) - 5px);
    }
    .partnerbadge {
      padding: var(smallmargin) 10px;
      border: var(borderstyle);
    }

    Try resizing the window to less than 400 pixels to see the media query change in action.

    An initial implementation of CSS Variables has just landed in Firefox Nightly, which is currently at version 29, and, after the February 3 merge, will be in Firefox Aurora. A few parts of the specification still need to be supported before the feature can go into the release cycle of Firefox Beta and Firefox. Cameron has the details on that:

    The only part of the specification that has not yet been implemented is the CSSVariableMap part, which provides an object that behaves like an ECMAScript Map, with get, set and other methods, to get the values of variables on a CSSStyleDeclaration. Note however that you can still get at them in the DOM by using the getPropertyValue and setProperty methods, as long as you use the full property names such as "var-theme-colour-1".

    The work for this feature was done in bug 773296, and my thanks to David Baron for doing the reviews there and to Emmanuele Bassi who did some initial work on the implementation. If you encounter any problems using the feature, please file a bug!

    For now, have fun playing with CSS variables in Nightly and tell us about issues you find.

  7. Web Payments with PaySwarm: Assets and Listings (part 2 of 3)

    The Promise of Web Payments

    The first article in this series on PaySwarm outlined how the technology is designed to transmit and receive funds with the same ease as sending and receiving an email. It went on to explain how taking the tools that have been traditionally only available to banks, Wall Street, and large corporations and making them available to everyone can reshape financial systems in a very positive way. The goal is not just one-click payments, but also to enable crowdfunding innovation, help Web developers earn a living through the Web, boost funding rounds for start-ups, and more.

    This article is the second in a three part series on buying and selling content using the PaySwarm specification. It will review some basic concepts covered in the first article and then explain how to list things for sale using PaySwarm.

    Review: Web Keys and JSON-LD

    As explained in part one, the Web Keys specification provides a simple, decentralized identity solution for the Web based on public key cryptography. It enables Web applications to send messages that are both protected from snooping and verifiable via digital signatures.

    The messages are marked up using the JavaScript Object Notation for Linking Data (JSON-LD). As the name suggests, JSON-LD is a way of expressing Linked Data in JSON. Both HTML documents and JSON-LD documents describe things and have links to other documents on the Web. The primary benefit with JSON-LD is that the entire document is understandable by a machine to the point that it can extract and perform basic reasoning without placing a huge burden on the developer writing the JSON markup.
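A minimal example may help make this concrete. The document below is ordinary JSON, readable by any JSON parser, but its @context maps each key to a globally unambiguous identifier that a JSON-LD processor can expand. The vocabulary URLs and values here are illustrative, not taken from the PaySwarm specification:

```javascript
// A minimal JSON-LD document: plain JSON plus an @context that maps
// each short key to a full, unambiguous IRI. (Illustrative values only.)
const doc = {
  '@context': {
    name: 'http://schema.org/name',
    homepage: { '@id': 'http://schema.org/url', '@type': '@id' }
  },
  '@id': 'http://example.com/people/alice',
  name: 'Alice',
  homepage: 'http://example.com/alice'
};

// Because it is ordinary JSON, no special tooling is needed to read it:
console.log(doc.name); // Alice
// ...while a JSON-LD processor can expand each key to its full IRI,
// e.g. "name" becomes "http://schema.org/name".
```

This is the sense in which the whole document is machine-understandable: the context removes any ambiguity about what each key means without burdening the developer writing the JSON.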

    Web Keys coupled with JSON-LD provide the underlying messaging and product markup mechanism used by PaySwarm to perform Web Payments.

    Decentralized Products and Services

    One design requirement of a Web Payments system is that the identity mechanism (Web Keys) must be decentralized. Another requirement is that the data markup mechanism (JSON-LD) must be capable of expressing decentralized resources like people, places, events, goods/services, and a variety of other data that will likely exist on 3rd party websites.

    Similarly, PaySwarm does not require that products and services be listed in a central location on the Web. Instead, it allows content creators and developers to be in control of their own product descriptions and prices in addition to giving them the option to delegate this responsibility to an App Store or large retail website. PaySwarm has the following requirements when it comes to listing products and services for sale:

    • The listings must be able to be decentralized, which reduces the possibility of monopolistic behavior among retailers.
    • The product being sold must be separable from the terms under which the sale occurs, enabling different prices to be associated with different licenses, affiliate sales, and business models like daily deals.
    • The creator of the product must be able to specify restrictions on pricing, resellers, validity periods, and a variety of other properties associated with the sale of the product. This ensures that the product creator is in control of her product at all times.
    • It must support decentralized extensibility, which enables applications to add application-specific data to the product description and terms of sale.
    • It must be secure, such that the risk of tampering with product descriptions and prices is mitigated.
    • It must be non-repudiable, such that the creator of the listing cannot dispute the fact that they created it.

    Assets and Listings

    There are two concepts that are core to understanding how products and services are listed for sale via PaySwarm.

    The first is the asset. An asset is a description of a product or service. Examples of assets include web pages, ebooks, groceries, concert tickets, dog walking services, donations, rights to transmit on a particular radio frequency band, and invoices for work performed. In general, anything of value can be modeled as an asset.

    An asset typically describes something to be sold, who created it, a set of restrictions on selling it, and a validity period. Since the asset is expressed using JSON-LD, a number of other application-specific properties can be associated with it. For example, a 3D printing store could include the dimensions of the asset when physically printed, the materials to be used to print the asset, and a set of assembly instructions. Upon purchase of the asset, a digital receipt with a description of the asset is generated. This receipt can be given to a printer to produce a physical representation of the asset.

    The second concept that is key to understanding how products and services are sold in PaySwarm is the listing. A listing is a description of the specific terms under which an asset is offered for sale. These terms include: the exact asset being sold, the license that will be associated with the purchase, the list of people or organizations to be paid for the asset, and the validity period for the listing. Like an asset, a listing may include other application-specific properties.

    This tutorial elaborates on creating an asset and listing and publishing them to a website, as shown briefly in the video below:

    Code from the payswarm.js node module will be used throughout this tutorial. Specifically, it utilizes the asset publishing example. The payment processor will be the PaySwarm Developer Sandbox, and the asset hosting service will be the PaySwarm Developer Listing Service.

    Creating an Asset

    As mentioned previously, an asset is described using JSON-LD. An asset is built in a programming language the same way one would build data to be published as a JSON document. Typically, this involves using an object (in JavaScript), a dictionary (in Python), an associative array (in PHP), or a hash (in Ruby). Here is the code to create an asset in JavaScript. Pay particular attention to the comments as they will explain what each key-value pair means. Let’s try a simple example first:

    // create a PaySwarm asset
    var asset = {
      // the @context is a hint to a JSON-LD processor on how to interpret the
      // key-value pairs in the document
      '@context': '',
      // this is the global identifier for the asset
      id: '',
      type: ['Asset'],
      // this is the person who created the asset
      creator: { fullName: 'Developer Joe' },
      title: 'Mozilla Hacks Demo Asset',
      // this is typically the link to the paid content
      assetContent: '',
      // this is the entity that has rights to the asset (or owns it)
      assetProvider: ''
    };

    The code above is a working example of an asset description in about seven lines of code (without comments). It describes an asset called Mozilla Hacks Demo Asset, which can be sold by anyone. The idea is that you would express this data on a website, either as a separate JSON-LD document or directly in an HTML5 page using RDFa markup. Any PaySwarm-compatible payment processor could then get your document and know exactly what you have created and what the sale restrictions are.

    Let’s take a look at a more complex example. The asset below specifies a song with a twist on how it can be sold. It can be resold by anyone, with 80% of the proceeds, or a minimum of $0.50, going to the band that created the song.

    // create a PaySwarm asset
    var asset = {
      // the @context is a hint to a JSON-LD processor on how to interpret the
      // key-value pairs in the document
      '@context': '',
      // this is the global identifier for the asset
      id: '',
      type: ['Asset', 'schema:MusicRecording'],
      // this is the person who created the asset
      creator: { fullName: 'The Webdevs' },
      title: 'HTML5 Me, Baby',
      // this is the link to the paid content
      assetContent: '',
      // this is the entity that has rights to the asset (the artist in this case)
      assetProvider: '',
      // these are restrictions under which the asset can be sold
      listingRestrictions: {
        // validity dates so that the price information below doesn't
        // last forever (you may want to change your price)
        validFrom: payswarm.w3cDate(validFrom),
        validUntil: payswarm.w3cDate(validUntil),
        payee: [{
          // this is an entity that should be paid when the asset is sold
          type: 'Payee',
          // when the asset is sold, put the money for the asset provider here
          destination: '',
          // the name of the group determines how this particular payee's rate comes
          // out of the total price of the sale
          payeeGroup: ['assetProvider'],
          // next 6 lines: the greater of 80% of the vendor's price or $0.50 should go
          // to the content creator
          payeeRate: '80',
          payeeRateType: 'Percentage',
          payeeApplyType: 'ApplyInclusively',
          payeeApplyGroup: ['vendor'],
          minimumAmount: '0.50',
          currency: 'USD',
          // show this to the buyer as one of the line items in the digital receipt
          comment: 'Payment to The Webdevs for creating the HTML5 Me, Baby song'
        }],
        payeeRule: [{
          // next 2 lines: allow the payment processor (the PaySwarm Authority) to add
          // a fee for processing the transaction
          type: 'PayeeRule',
          payeeGroupPrefix: ['authority']
        }, {
          // next 4 lines: vendors can only specify a flat fee for reselling the song
          // from their website
          type: 'PayeeRule',
          payeeGroup: ['vendor'],
          payeeRateType: 'FlatAmount',
          payeeApplyType: 'ApplyExclusively'
        }]
      }
    };

    The code above is a more comprehensive example of an asset description in approximately 30 lines of code (without comments). It describes an asset, which is a song, called HTML5 Me, Baby that can be sold by anyone who complies with the other restrictions set forth. A content creator who describes an asset this way on the web creates an incentive for their fan base to blog about and resell the content on their personal blogs. Imagine a WordPress plugin that allows you to review and resell your favorite band's songs directly from your blog. The artist is always guaranteed to get paid, regardless of which blog sells it or which PaySwarm Authority processes the payment. In this particular example, the asset provider has enabled their fans to take a 20% cut of the sale.

    When a sale of the song above occurs, 80% of the sale price, or a minimum of $0.50 USD, is transferred to the asset provider. The PaySwarm Authority may add a fee for processing payment for the asset. A vendor may set a flat price for the song. For example, if the song is sold for $1.00, then $0.80 goes to The Webdevs (80% of final price restriction kicks in). If the song is sold for $0.60 USD, then $0.50 goes to The WebDevs ($0.50 USD minimum restriction kicks in). As you can see, the listing restrictions provide a great deal of power to the asset provider to specify exactly how their product should be sold.
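The arithmetic of that restriction can be sketched in a few lines. This is a hypothetical helper, not part of payswarm.js; in practice the payment network computes the split from the Payee description itself:

```javascript
// Hypothetical helper mirroring the asset's payee rule: the creator
// receives the greater of 80% of the sale price or a $0.50 minimum.
function creatorShare(salePrice, rate = 0.80, minimum = 0.50) {
  return Math.max(salePrice * rate, minimum);
}

console.log(creatorShare(1.00)); // 0.8 -> the percentage restriction kicks in
console.log(creatorShare(0.60)); // 0.5 -> the minimumAmount restriction kicks in
```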

    Once the asset has been created, the developer can then use her private key to digitally sign the asset:

    payswarm.sign(asset, {
      privateKeyPem: privateKey.privateKeyPem
    }, function(err, signedAsset) {
      // do something with the signed asset
    });

    Digitally signing the asset ensures that no one can change any of the information associated with it. This is important because, at a minimum, a content creator wouldn’t want to be removed from the list of people who should be paid when the asset is sold. The digital signature also guarantees, to a buyer, that the content creator did in fact describe the asset and its sale restrictions as seen.

    The next step in preparing the asset for sale is to create a special identifier for the asset so that it can be accurately referenced by the listing. This identifier is called a cryptographic hash and is generated using the payswarm.js library’s hash() function:

    // generate a hash for the signed asset
    payswarm.hash(signedAsset, function(err, assetHash) {
      // do something with the signed asset hash
    });

    Once we have the signed asset and the signed asset hash, we can create the listing that will be used to purchase the asset.

    Creating a Listing

    As mentioned earlier in this article, a listing is a description of the terms under which an asset is offered for sale. A listing, like an asset, is expressed in JSON-LD. The listing below specifies which license to associate with the asset being sold as well as who should get paid for the sale of the asset. It also specifies restrictions on certain payees, such as how much a PaySwarm Authority can charge for processing a payment. Pay particular attention to the comments above each line as they explain what the line does:

    // create the listing
    var listing = {
      // the @context is a hint to a JSON-LD processor on how to interpret the
      // key-value pairs in the document
      '@context': '',
      // this is the identifier for the listing
      id: '',
      type: ['Listing'],
      // the identity offering the item for sale is The WebDevs
      vendor: '',
      payee: [{
        type: 'Payee',
        // payment should be deposited into this financial account
        destination: '',
        // this is used to determine how fees are applied to the final price
        payeeGroup: ['vendor'],
        // the next 4 lines: Sell the song for $1.00
        payeeRateType: 'FlatAmount',
        payeeRate: '1.00',
        currency: 'USD',
        payeeApplyType: 'ApplyExclusively',
        // this should be displayed for the line item in the digital receipt
        comment: 'Payment to The Webdevs for creating the HTML5 Me, Baby song'
      }],
      // the next 6 lines: The payment processor cannot take more than 5% of
      // the total sale price.
      payeeRule: [{
        type: 'PayeeRule',
        payeeGroupPrefix: ['authority'],
        payeeRateType: 'Percentage',
        maximumPayeeRate: '5',
        payeeApplyType: 'ApplyInclusively'
      }],
      // this is the ID of the asset being sold
      asset: '',
      assetHash: assetHash,
      // this is the license that should be associated with the asset upon sale
      license: '',
      // licenseDigest is assumed to hold the SHA-256 digest of the license
      // text (the literal hash value was elided here)
      licenseHash: 'urn:sha256:' + licenseDigest,
      // the offer of sale is only valid between these two times
      validFrom: payswarm.w3cDate(validFrom),
      validUntil: payswarm.w3cDate(validUntil)
    };

    The code above states that the song HTML5 Me, Baby is for sale for $1.00 and a payment processor can’t charge more than 5% of the total sale price in transaction processing fees. The song is licensed for personal use, with the full text of the license available at the provided URL. Once the listing has been created, it must be digitally signed:

    payswarm.sign(listing, {
      privateKeyPem: cfg.publicKey.privateKeyPem
    }, function(err, signedListing) {
      // do something with the signed listing
    });

    The listing is digitally signed for the same reason that the asset is: to prevent tampering and for non-repudiation. Once both the asset and the listing are signed, they are ready to be published. This could be done by publishing them as two different documents or as a single document. The easiest way to publish them as a single document in JSON-LD is to use the ‘@graph’ keyword, which essentially states that you want to combine two independent objects into the same JSON-LD document.

    var assetAndListing = {
      '@context': '',
      '@graph': [signedAsset, signedListing]
    };

    Publishing an Asset and Listing

    The final step in the publication process is to take the signed asset and listing document and publish it to the Web. Since PaySwarm is decentralized, we can publish our signed asset and listing to any website. Since the assets are digitally signed, the website doesn’t have to be protected by Transport Layer Security (TLS), more commonly known as HTTPS. The digital signature guarantees that no one can modify the data without invalidating the signature on the asset and listing. In the following example, we use the payswarm.js library’s postJsonLd() function to upload the asset and listing to a public PaySwarm listing service.

    var url = '';
    payswarm.postJsonLd(url, assetAndListing, function(err, result) {
      // if result is a 200, then your listing has been published
    });

    After the asset and listing have been uploaded, the asset can be purchased by any PaySwarm client. Note that while the asset and listing information was expressed as JSON-LD, it could have just as easily been published as HTML5+RDFa. PaySwarm Authorities are capable of interpreting both JSON-LD and RDFa. While we won’t go into the details in this blog post, a future blog post will address how to take the asset and listing Linked Data expressed above and publish it as HTML5+RDFa.

    Next: Purchasing an Asset

    This is the second article in a three part series on developing PaySwarm-aware Web applications to engage in commerce. The next article will explain how to purchase the asset that was created in this tutorial and retrieve the digital receipt of the sale.

  8. Web Payments with PaySwarm: Identity (part 1 of 3)

    The Promise of Web Payments

    The Web has fundamentally transformed the way we publish and interact with information. However, the way we reward people for creating that content has not changed. The Web’s foundation was not built to transmit and receive funds with the same ease as sending and receiving an email.

    Making payments on the Web simpler and more accessible has more than superficial advantages. By taking the tools that have been traditionally only available to banks, Wall Street, and large corporations and making them available to everyone, we can reshape the world’s financial system. The goal is not simply to enable one-click payments, but also to enable crowdfunding innovation, help Web developers earn a living through the Web, boost funding rounds for world-changing start-ups, and more.

    Whilst bringing new or powerful tools to the general public will breed competition and innovation, open Web payments can also bring about much more basic and societal changes. 2.5 billion people around the world don’t have bank accounts and thus have no ability to save money due to banking corruption and/or high fees. When a family has no way of saving for the future, it greatly limits their chances of pulling themselves out of poverty. The promise of Web payments is about more than just an exciting future, it is about a more egalitarian one.

    So, how do we get from where we are today to a future where sending and receiving funds on the Web is as easy as clicking a button?

    Web Payment Requirements

    One of the primary drivers of innovation on the Web is that it is decentralized. You don’t have to ask permission to publish your creation. Open standards such as TCP/IP, HTTP, HTML, CSS, JavaScript, and JSON ensure interoperability between applications. Thus, a solution for payments on the Web must have at least the following traits:

    • It must be decentralized.
    • It must be an open, patent and royalty-free standard.
    • It must be designed to work with Web architecture like URLs, HTTP, and other Web standards.
    • It must allow anyone to implement the standard and interoperate with others that implement the standard.

    In addition to these basic characteristics of a successful Web technology, a successful Web Payments technology must also do the following:

    • It must enable choice among customers, vendors, and payment processors, in order to drive healthy market competition.
    • It must be extensible in a decentralized way, allowing application-specific extensions to the core protocol without coordination.
    • It must be flexible enough to perform payments as well as support higher order economic behaviors like crowdfunding and executing legal contracts.
    • It must be secure, using the latest security best practices to protect the entire system from attack.
    • It must be currency agnostic, supporting not only the US Dollar, the Euro, and the Japanese Yen, but also supporting virtual currencies like Bitcoin and Ven.
    • It must be easy to develop for and integrate into the Web.

    Fortunately, the Web Payments group at the World Wide Web Consortium (W3C) has created a set of specifications called PaySwarm that addresses all of the requirements listed above. The group is currently in contact with Mozilla and is trying to converge with the mozPay() API as well as Persona to provide a truly excellent Web Payments experience. There is even a commercial launch of the technology using real money, which means you can use it to send and receive funds today.

    This is the first blog post in a three part series that will explore the PaySwarm specifications and demonstrate, using code examples and video, how to build a fully functional PaySwarm client in Node.js.

    Decentralized Identity

    A decentralized payment system for the Web means that, ultimately, the identity mechanism must be decentralized as well. There are a number of identity solutions today that are decentralized. OpenID, WebID, Web Keys, and BrowserID/Persona are among some of the more well-known, decentralized identity mechanisms that are designed for the Web. Identity for Web Payments brings in an additional set of requirements on top of the normal set of requirements for a Web identity solution. The following is a brief list of these requirements:

    • It must be decentralized.
    • It must support discoverability by using a resolvable address, like a URL or email address.
    • It must support, with authorization, arbitrary machine-readable information being attached to the identity by 3rd parties.
    • It must be able to provide both public and private data to external sites, based on who is accessing the resource.
    • It must provide a secure digital signature and encryption mechanism.

    The solution we ended up settling on is called Web Keys. It enables secure, decentralized, discoverable, controlled access to arbitrary machine-readable information associated with an identity. This identity mechanism and the functionality it enables is at the heart of the PaySwarm work.

    Web Keys

    For reasons that we don’t have time to go into in this blog post, OpenID, WebID, and OAuth were considered but ultimately did not make the cut because each of them lacks at least one of the features above. At the time, BrowserID, which was later rebranded as Persona, was still too early in the design stage to consider as a viable solution. The Web Payments group is currently working with the Persona team to see if all of the features above could be integrated into Persona to support Web Payments, but it is not certain that it will happen.

    In the end, we had to create a minimal identity solution for Web Payments because one did not exist that met all of the requirements above. So, let’s dive into using the Web Keys technology to establish an identity online for the purposes of making decentralized Web Payments.

    This tutorial will use code from the payswarm.js node module. Specifically, it will use code snippets from the Web Key registration example. The tutorial will also use the PaySwarm Developer Sandbox as our Web Key host and payment processor.

    The following video shows what we will be building throughout the rest of this tutorial:

    Generating a Key

    The Web Keys specification is built on top of a technology called public-key cryptography. We don’t have time to go into how this technology works in detail in this blog post, so what follows is a quick overview for those that are unfamiliar with the technology.

    Public key cryptography is used to secure traffic on the Web. If you have ever seen a padlock icon when making a purchase online, you’re using public key cryptography. When you generate these keys, you are given two of them: a public key and a private key. Together the keys can be used to do two major operations that are important in Web Payments. The first is that the private key can be used to digitally sign a purchase request. The public key can be used to verify the digital signature on the purchase request, proving that you, and only you, wish to make a purchase. The second is that the public key can be used to encrypt information so that only the holder of the private key can read that information. Together these keys can be used to prove your identity online to others and to encrypt messages sent to you so that only you can read them.

    To generate the pair of keys in PaySwarm, you can do the following:

    var payswarm = require('payswarm-client');
    payswarm.createKeyPair({keySize: 2048}, function(err, pair) {
      if(err) {
        console.log('Oops, something went wrong:', err);
      }
      else {
        var publicKeyPem = pair.publicKey;
        var privateKeyPem = pair.privateKey;
      }
    });

    In the code above, the PaySwarm library’s createKeyPair() function is used to create a 2048-bit key pair. The more bits that are used for the key pair, the harder it is to do a brute-force attack against it. At present (April 2013), a key length of 2048 bits is considered very secure for PaySwarm transactions. You should never use a key length below 2048 bits; such keys should be rejected by the financial network. Once the key pair is created, both the public key and the private key are returned to you. It is up to the application to then store the keys. The private key must be stored very securely, preferably encrypting the key with a long passphrase before writing it to disk. The public key should be published in a way that other people can use it to verify your digital signatures and send encrypted data to you that only you will be able to read. The process of publishing the public key to the web will be covered in the next section.

    Registering a Key

    The Web Keys specification details how an application can associate a public key with an identity. Associating a Web Key with an identity consists of a multi-step process:

    1. Discover the Web Key server’s key registration base URL.
    2. Generate a key registration URL.
    3. Register the key via a web browser.

    The steps above are demonstrated in the code snippet below:

      function(callback) {
        // Step #1: Fetch the Web Keys endpoint from the PaySwarm Authority.
        payswarm.getWebKeysConfig('', callback);
      }, function(endpoints, callback) {
        // Step #2: Generate the key registration URL
        var registrationUrl = URL.parse(endpoints.publicKeyService, true, true);
        registrationUrl.query['public-key'] = publicKeyPem;
        registrationUrl.query['response-nonce'] =
          new Date().getTime().toString(16);
        registrationUrl = URL.format(registrationUrl);
        callback(null, registrationUrl);
      }, function(registrationUrl, callback) {
        // Step #3: Register the key via a web browser.
        console.log('Register your key here: ', registrationUrl);
      }

    A PaySwarm client running the code above will first fetch the Web Keys configuration for the site via the getWebKeysConfig() function. The result is a JSON-LD document, which can be used as a JSON document. The JSON key that holds the registration endpoint is publicKeyService. A registration URL is then constructed by adding URL parameters to the registration endpoint URL. First, the public key that was generated by the application is added. Then, a single-use response nonce is added, which is used to prevent replay attacks. The response from the server will include the response nonce, so be sure to store it somewhere. It is very important that the response nonce is tracked by the application and never used more than once. If an application detects that a response nonce has been used more than once when receiving a message from the server, you may be under attack and must ignore the message. Messages in PaySwarm are transient in nature, so nonces may only need to be stored for a limited period of time, simplifying certain implementations.
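The nonce-tracking requirement can be sketched as a seen-set with an expiry window. This is a hypothetical helper for illustration; a real application would persist the set and tune the window to its message lifetime:

```javascript
// Hypothetical replay guard: remember each response nonce until it
// expires, and reject any message whose nonce has been seen before.
class NonceGuard {
  constructor(ttlMs = 15 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.seen = new Map(); // nonce -> timestamp when first seen
  }
  accept(nonce, now = Date.now()) {
    // drop expired nonces so the set stays small; PaySwarm messages are
    // transient, so old nonces never need to be checked again
    for (const [n, t] of this.seen) {
      if (now - t > this.ttlMs) this.seen.delete(n);
    }
    if (this.seen.has(nonce)) return false; // replay: ignore the message
    this.seen.set(nonce, now);
    return true;
  }
}

const guard = new NonceGuard();
console.log(guard.accept('abc123')); // true: first use
console.log(guard.accept('abc123')); // false: replayed, must be ignored
```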

    What the PaySwarm Authority does when the browser goes to the registration URL is implementation specific. The only requirement is that it provides a permanent URL location for the public key and the URL for the owner of the public key being registered. Note that the URL for the owner must also be dereferenceable, so it can provide information that indicates the owner does, in fact, own the public key. The response will be encrypted using the public key provided and can only be decrypted by the application’s private key. The next section will cover handling the registration response from the PaySwarm Authority.

    The Registration Response

    The last step in registering a Web Key is receiving the encrypted response from the PaySwarm Authority. Finalizing the registration of a Web Key consists of the following two steps:

    1. Receive the encrypted key registration response.
    2. Decode the key registration response.

    The steps above are demonstrated in the code snippet below:

        // step #1: receive the encrypted key registration response
      function(encryptedMessage, callback) {
        // step #2: decode the key registration response
        payswarm.decrypt(encryptedMessage, {privateKey: privateKeyPem}, callback);
      }, function(message, callback) {
        var publicKeyUrl = message.publicKey;
        var publicKeyOwnerUrl = message.owner;
        var financialAccountUrl = message.destination;
      }

    When registering a public key, the application will ask you to go to a URL on your PaySwarm Authority. If your application is a console application, the final step of the registration process will provide an encrypted message that you can copy and paste into your application. If your application is a Web application, you can provide a callback URL that the encrypted message will be sent to after registration. That encrypted message is the input to the JavaScript code above. The PaySwarm library uses a function called decrypt() that will take the encrypted message and the application’s private key, which was generated earlier in the tutorial, and provide the unencrypted message that was sent to you by the PaySwarm Authority.

    The unencrypted message will contain, at a minimum, the URL for the public key and the identity URL for the owner of the public key. The Web Payments specification also requires that a financial account is associated with the public key, which is useful when you need to tell another application where to send funds. Once the application has this information, it should store it in a safe place, preferably password protected, along with the private key data.

    Browser-based Web Key Registration

    This tutorial covered the basics of creating a console-based Web Key client for the purposes of registering a public key. Typically a Web Key client will be built into software running on a public website that is accessed using a Web browser. An example of the experience provided by a purely browser-based Web Key workflow is shown below:

    Next: Listing an Asset for Sale

    This is the first article in a three part series on using the PaySwarm specifications which can be used to send and receive funds via the Web in a decentralized way. The next article in the series will explain how to create an Asset, such as an article, song, or movie, and list it for sale on the Web. The beauty of PaySwarm is that a customer on your site does not need to use the same payment processor as you, just as you don’t need to care which payment processor the customer uses. With PaySwarm, the money just flows to the appropriate place.

  9. Welcoming the new kid: Web Platform Docs

    Documenting the open Web and Web standards is a big job! As Mozillians, we’re well aware of this — documenting the open Web has been the mission of the Mozilla Developer Network for many years. Anything we can do to further the cause of a free and open Web is a worthwhile endeavor. With so many different groups involved in the design and development of new Web standards, it can be tricky to figure out the current right way to use them. That’s why we’re excited to be able to share this news with you.

    Introducing Web Platform Docs

    Mozilla, along with a group of major Web-related organizations convened by the World-Wide Web Consortium (W3C), has announced the start of Web Platform Docs (WPD), a new site for Web standards documentation that will be openly licensed and community-maintained. The supporting organizations, known as the “stewards,” have contributed an initial batch of content to the site from their existing resources. The body of content is very much in an alpha state, as there is much work to be done to integrate, improve, and augment that content. The stewards want to open the site for community participation as early as possible. With your help, WPD can achieve its vision of being a comprehensive and authoritative source for Web developer documentation.

    Web Platform Docs has a similar goal to MDN: to help web developers improve their ability to make sites and applications using web standards. Mozilla welcomes this effort and joins with the other stewards in financially supporting the Web Platform Docs project, and in providing seed content for it. This new project is very much aligned with Mozilla’s mission to promote openness, innovation, and opportunity on the Web.

    What does this mean for MDN?

    MDN already provides a wealth of information for Web developers and for developers who use or contribute to Mozilla technology. That isn’t going to change. Some members of the MDN community, including both paid staff and volunteers, are actively involved with the Web Platform Docs project. Web Platform Docs incorporates some seed content from MDN, namely tutorial and guide content. Anyone is welcome to use MDN content under its Creative Commons Attribution-ShareAlike license (CC-BY-SA), whether on WPD or elsewhere.

    Licensing issues

    Licensing is where things get a little bit complicated. MDN and WPD use different contributor agreements and different licenses for reuse. By default, WPD contributors grant W3C the ability to relicense their original content under an open license (Creative Commons Attribution (CC-BY)). MDN content is licensed by the contributors under CC-BY-SA. The copyright belongs to the contributors, not to Mozilla, so we don’t have the right to change the license. Therefore, content that originates from MDN must be specially marked and attributed when it appears on WPD. If you create an account on WPD and create a new page, you’ll see that there is an option to indicate that the content you’re contributing came from MDN, and to provide the original URL on MDN. If you do copy MDN content (and we would be happy if you did so), we ask that you comply with the license requirements. There is also a way on WPD to mark sections of articles as coming from MDN, for cases where they get merged into CC-BY content.

    Get involved

    We encourage all Mozillians to visit the Web Platform Docs site, take a look, and get involved. By working with the other stewards to jointly build a complete, concise, and accurate suite of documentation of and for Web standards, we can help make the future of the World Wide Web brighter than ever!

  10. It's Opus, it rocks and now it's an audio codec standard!

    In a great victory for open standards, the Internet Engineering Task Force (IETF) has just standardized Opus as RFC 6716.

    Opus is the first state-of-the-art, free audio codec to be standardized. We think this will help us achieve wider adoption than prior royalty-free codecs like Speex and Vorbis. This spells the beginning of the end for proprietary formats, and we are now working on doing the same thing for video.

    There was both skepticism and outright opposition to this work when it was first proposed in the IETF over 3 years ago. However, the results have shown that we can create a better codec through collaboration, rather than competition between patented technologies. Open standards benefit both open source organizations and proprietary companies, and we have been successful working together to create one. Opus is the result of a collaboration between many organizations, including the IETF, Mozilla, Microsoft (through Skype), Xiph.Org, Octasic, Broadcom, and Google.

    A highly flexible codec

    Unlike previous audio codecs, which have typically focused on a narrow set of applications (either voice or music, in a narrow range of bitrates, for either real-time or storage applications), Opus is highly flexible. It can adaptively switch among:

    • Bitrates from 6 kb/s to 512 kb/s
    • Voice and music
    • Mono and stereo
    • Narrowband (8 kHz) to Fullband (48 kHz)
    • Frame sizes from 2.5 ms to 60 ms

    Most importantly, it can adapt seamlessly within these operating points. Doing all of this with proprietary codecs would require at least six different codecs. Opus replaces all of them, with better quality.

    Illustration of the quality of different codecs

    The specification is available in RFC 6716, which includes the reference implementation. Up-to-date software releases are also available.
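    To make the flexibility concrete, here is a minimal sketch of driving the reference library (libopus) across those operating points. It assumes the libopus headers and library are installed (link with -lopus); the bitrate, bandwidth, and signal settings shown are illustrative choices, and error handling is abbreviated:

    ```c
    /* Sketch: one Opus encoder instance covers the whole operating range. */
    #include <opus/opus.h>
    #include <stdio.h>
    #include <string.h>

    /* Encode one 20 ms frame of silence; returns the packet size in bytes,
       or a negative Opus error code. */
    static opus_int32 encode_silence_20ms(void) {
        int err;
        /* Create the encoder for 48 kHz stereo in the music-oriented mode. */
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK)
            return err;

        /* Retune the same encoder on the fly -- no codec switch needed. */
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(64000));  /* anywhere in 6-512 kb/s */
        opus_encoder_ctl(enc, OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
        opus_encoder_ctl(enc, OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC)); /* music vs. voice hint */

        /* 20 ms at 48 kHz is 960 samples per channel. */
        opus_int16 pcm[960 * 2];
        unsigned char packet[4000];
        memset(pcm, 0, sizeof(pcm));
        opus_int32 len = opus_encode(enc, pcm, 960, packet, sizeof(packet));

        opus_encoder_destroy(enc);
        return len;
    }

    int main(void) {
        opus_int32 len = encode_silence_20ms();
        if (len < 0) {
            fprintf(stderr, "encode failed: %s\n", opus_strerror(len));
            return 1;
        }
        printf("encoded 20 ms into %d bytes\n", (int)len);
        return 0;
    }
    ```

    Because all of these knobs belong to a single encoder state, an application can change bitrate, bandwidth, or signal type between frames without renegotiating or restarting the stream.
    
    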

    Some audio standards define a normative encoder, which cannot be improved after it is standardized. Others allow for flexibility in the encoder, but release an intentionally hobbled reference implementation to force you to license their proprietary encoders. For Opus, we chose to allow flexibility for future encoders, but we also made the best one we knew how and released that as the reference implementation, so everyone could use it. We will continue to improve it, and keep releasing those improvements as open source.

    Use cases

    Opus is primarily designed for use in interactive applications on the Internet, including voice over IP (VoIP), teleconferencing, in-game chatting, and even live, distributed music performances. The IETF recently decided with “strong consensus” to adopt Opus as a mandatory-to-implement (MTI) codec for WebRTC, an upcoming standard for real-time communication on the Web. Despite the focus on low latency, Opus also excels at streaming and storage applications, beating existing high-delay codecs like Vorbis and HE-AAC. It’s great for internet radio, adaptive streaming, game sound effects, and much more.
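    In a WebRTC session, Opus support surfaces in the SDP offer/answer exchange as an audio codec entry. A hypothetical fragment might look like the following (the payload type 111 and the fmtp parameters are illustrative choices, not fixed values):

    ```
    m=audio 9 UDP/TLS/RTP/SAVPF 111
    a=rtpmap:111 opus/48000/2
    a=fmtp:111 minptime=10;useinbandfec=1
    ```

    The rtpmap line always advertises Opus at a 48 kHz clock rate with two channels, even when the actual stream is mono or narrower in bandwidth; the encoder adapts internally.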

    Although Opus is just out, it is already supported in many applications, such as Firefox, GStreamer, FFmpeg, foobar2000, K-Lite Codec Pack, and lavfilters, with upcoming support in VLC, Rockbox and Mumble.

    For more information, visit the Opus website.