Mozilla

Articles

  1. Pseudo elements, promise inspection, raw headers, and much more – Firefox Developer Edition 36

    Firefox 36 was just uplifted to the Developer Edition channel, so let’s take a look at the most important Developer Tools changes in this release. We will also cover some changes from Firefox 35 since it was released shortly before the initial Developer Edition announcement. There is a lot to talk about, so let’s get right to it.

    Inspector

    You can now inspect ::before and ::after pseudo-elements. They behave like other elements in the markup tree and inspector sidebars. (35, development notes)

    [Screenshot: inspecting ::before and ::after pseudo-elements]

    There is a new “Show DOM Properties” context menu item in the markup tree. (35, development notes, documentation about this feature on MDN)

    [Screenshot: the “Show DOM Properties” context menu item]

    The box model highlighter now works on remote targets, so there is a full-featured highlighter even when working with pages on Firefox for Android or apps on Firefox OS. (36, development notes, and technical documentation for building your own custom highlighter)

    Shadow DOM content is now visible in the markup tree; note that you will need to set dom.webcomponents.enabled to true to test this feature (36, development notes, and see bug 1053898 for more work in this space).

    We borrowed a useful feature from Firebug and now allow more paste options when right-clicking a node in the markup tree. (36, development notes, documentation about this feature on MDN)

    [Screenshot: paste options in the markup-tree context menu]

    Some more changes to the Inspector included in Firefox 35 & 36:

    • Deleting a node now selects the previous sibling instead of the parent (36, development notes)
    • There is higher contrast for the currently hovered node in the markup view (36, development notes)
    • Hover over a CSS selector in the computed view to highlight all the nodes that match that selector on the page. (35, development notes)

    Debugger / Console

    DOM Promises are now inspectable: you can examine a promise’s state and value at any moment. (36, development notes)

    [Screenshot: promise inspection in the console]

    The debugger now recognizes and works with eval’ed sources. (36, development notes)

    Eval’ed sources support the //# sourceURL=path.js syntax, which will make them appear as a normal file in the debugger and in stack traces reported by Error.prototype.stack. See this post: http://fitzgeraldnick.com/weblog/59/ for much more information. (36, development notes,  more development notes)
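    If you want to see the directive in action, here is a minimal sketch (the file name generated/widget.js is made up): eval a string carrying a //# sourceURL comment, and the resulting stack trace reports that name instead of an anonymous eval source.

    ```javascript
    // A minimal sketch of the //# sourceURL directive. The file name
    // "generated/widget.js" is hypothetical; any path-like string works.
    var src =
      'function boom() { throw new Error("from eval"); }\n' +
      'boom();\n' +
      '//# sourceURL=generated/widget.js';

    var stackText = "";
    try {
      eval(src);
    } catch (e) {
      stackText = e.stack;
    }

    // Stack frames now name generated/widget.js rather than an
    // anonymous eval source.
    console.log(stackText);
    ```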

    Console statements now include the column number they originated from. (36, development notes)

    WebIDE

    You can now connect to Firefox for Android from the WebIDE. See the documentation for debugging Firefox for Android on MDN. (36, development notes)

    We also made some changes to improve the user experience in the WebIDE:

    • Developer tools now open by default when you select a runtime app or tab (35, development notes)
    • The last project is re-selected on connect if it is a runtime app (35, development notes)
    • The last used runtime is auto-selected and connected to, if available (35, development notes)
    • Font resizing (36, development notes)
    • You can now add a hosted app project by entering the base URL (e.g. “http://example.com”) instead of the full path to the manifest.webapp file (35, development notes)

    Network Monitor

    We added a plain request/response headers view to make it easier to view and copy the raw headers on a request. (35, development notes)

    [Screenshot: raw request/response headers in the Network Monitor]

    Here is a list of all the bugs resolved for Firefox 35 and all the bugs resolved for Firefox 36.

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice or get in touch with the team at @FirefoxDevTools on Twitter.

  2. Mozilla and Web Components: Update

    Editor’s note: Mozilla has a long history of participating in standards development. The post below shows a real-time slice of how standards are debated and adopted. The goal is to update developers who are most affected by implementation decisions we make in Firefox. We are particularly interested in getting feedback from JavaScript library and framework developers.

    Mozilla has been working on Web Components — a technology encompassing HTML imports, custom elements, and shadow DOM — for a while now and testing this approach in Gaia, the frontend of Firefox OS. Unfortunately, our feedback into the standards process has not always resulted in the changes required for us to ship Web Components. Therefore we decided to reevaluate our stance with members of the developer community.

    We came up with the following tentative plan for shipping Web Components in Firefox, and we would really appreciate input from the developer community as we move this forward. Web Components change a core aspect of the Web Platform, and getting them right is important. We believe the way to do that is to let the change be driven by the hard-learned lessons of JavaScript library developers.

    • Mozilla will not ship an implementation of HTML Imports. We expect that once JavaScript modules — a feature derived from JavaScript libraries written by the developer community — are shipped, the way we look at this problem will have changed. We have also learned from Gaia and others that the lack of HTML Imports is not a problem, as the functionality can easily be provided with a polyfill if desired.
    • Mozilla will ship an implementation of custom elements. Exposing the lifecycle is a very important aspect for the creation of components. We will work with the standards community to use Symbol-named properties for the callbacks to prevent name collisions. We will also ensure the strategy surrounding subclassing is sound with the latest work on that front in JavaScript and that the callbacks are sufficiently capable to describe the lifecycle of elements or can at least be changed in that direction.
    • Mozilla will ship an implementation of shadow DOM. We think work needs to be done to decouple style isolation from event retargeting to make event delegation possible in frameworks, and we would like to ensure distribution is sufficiently extensible beyond Selectors; Gaia, for example, would like this ability.

    Our next steps will be working with the standards community to make these changes happen, making sure there is sufficient test coverage in web-platform-tests, and making sure the specifications become detailed enough to implement from.

    So please let us know what you think here in the comments or directly on the public-webapps standards list!

  3. Introducing the JavaScript Internationalization API

    Firefox 29 shipped half a year ago, so this post is long overdue. Nevertheless, I wanted to pause for a second to discuss the Internationalization API, first shipped on desktop in that release (and passing all tests!). Norbert Lindenberg wrote most of the implementation, and I reviewed it and now maintain it. (Work by Makoto Kato should bring this to Android soon; b2g may take longer due to some b2g-specific hurdles. Stay tuned.)

    What’s internationalization?

    Internationalization (i18n for short — i, eighteen characters, n) is the process of writing applications in a way that allows them to be easily adapted for audiences from varied places, using varied languages. It’s easy to get this wrong by inadvertently assuming one’s users come from one place and speak one language, especially if you don’t even know you’ve made an assumption.

    function formatDate(d)
    {
      // Everyone uses month/date/year...right?
      var month = d.getMonth() + 1;
      var date = d.getDate();
      var year = d.getFullYear();
      return month + "/" + date + "/" + year;
    }
     
    function formatMoney(amount)
    {
      // All money is dollars with two fractional digits...right?
      return "$" + amount.toFixed(2);
    }
     
    function sortNames(names)
    {
      function sortAlphabetically(a, b)
      {
        var left = a.toLowerCase(), right = b.toLowerCase();
        if (left > right)
          return 1;
        if (left === right)
          return 0;
        return -1;
      }
     
      // Names always sort alphabetically...right?
      names.sort(sortAlphabetically);
    }

    JavaScript’s historical i18n support is poor

    i18n-aware formatting in traditional JS uses the various toLocaleString() methods. The resulting strings contain whatever details the implementation chooses to provide, with no way to pick and choose (did you need a weekday in that formatted date? is the year irrelevant?). Even if the proper details are included, the format might be wrong, e.g. decimal when a percentage was desired. And you can’t choose a locale.

    As for sorting, JS provided almost no useful locale-sensitive text-comparison (collation) functions. localeCompare() existed, but it had a very awkward interface that made it unsuited for direct use with sort, and it too didn’t permit choosing a locale or a specific sort order.
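    To illustrate the awkwardness: before this API, a locale-sensitive sort meant wrapping localeCompare in a throwaway comparator, with no way to request a locale or adjust the comparison.

    ```javascript
    // Pre-Intl locale-aware sorting: localeCompare can't be handed to
    // sort() directly, so every call site writes its own wrapper, and
    // neither the locale nor the sort order can be specified.
    var names = ["Ähre", "Zebra", "Apfel"];
    names.sort(function (a, b) {
      return a.localeCompare(b);
    });
    console.log(names); // order depends on the implementation's default locale
    ```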

    These limitations are bad enough that — this surprised me greatly when I learned it! — serious web applications that need i18n capabilities (most commonly, financial sites displaying currencies) will box up the data, send it to a server, have the server perform the operation, and send it back to the client. Server roundtrips just to format amounts of money. Yeesh.

    A new JS Internationalization API

    The new ECMAScript Internationalization API greatly improves JavaScript’s i18n capabilities. It provides all the flourishes one could want for formatting dates and numbers and sorting text. The locale is selectable, with fallback if the requested locale is unsupported. Formatting requests can specify the particular components to include. Custom formats for percentages, significant digits, and currencies are supported. Numerous collation options are exposed for use in sorting text. And if you care about performance, the up-front work to select a locale and process options can now be done once, instead of once every time a locale-dependent operation is performed.

    That said, the API is not a panacea. The API is “best effort” only. Precise outputs are almost always deliberately unspecified. An implementation could legally support only the oj locale, or it could ignore (almost all) provided formatting options. Most implementations will have high-quality support for many locales, but it’s not guaranteed (particularly on resource-constrained systems such as mobile).

    Under the hood, Firefox’s implementation depends upon the International Components for Unicode library (ICU), which in turn depends upon the Unicode Common Locale Data Repository (CLDR) locale data set. Our implementation is self-hosted: most of the implementation atop ICU is written in JavaScript itself. We hit a few bumps along the way (we haven’t self-hosted anything this large before), but nothing major.

    The Intl interface

    The i18n API lives on the global Intl object. Intl contains three constructors: Intl.Collator, Intl.DateTimeFormat, and Intl.NumberFormat. Each constructor creates an object exposing the relevant operation, efficiently caching locale and options for the operation. Creating such an object follows this pattern:

    var ctor = "Collator"; // or the others
    var instance = new Intl[ctor](locales, options);

    locales is a string specifying a single language tag or an arraylike object containing multiple language tags. Language tags are strings like en (English generally), de-AT (German as used in Austria), or zh-Hant-TW (Chinese as used in Taiwan, using the traditional Chinese script). Language tags can also include a “Unicode extension”, of the form -u-key1-value1-key2-value2..., where each key is an “extension key”. The various constructors interpret these specially.

    options is an object whose properties (or their absence, by evaluating to undefined) determine how the formatter or collator behaves. Its exact interpretation is determined by the individual constructor.

    Given locale information and options, the implementation will try to produce the closest behavior it can to the “ideal” behavior. Firefox supports 400+ locales for collation and 600+ locales for date/time and number formatting, so it’s very likely (but not guaranteed) the locales you might care about are supported.

    Intl generally provides no guarantee of particular behavior. If the requested locale is unsupported, Intl allows best-effort behavior. Even if the locale is supported, behavior is not rigidly specified. Never assume that a particular set of options corresponds to a particular format. The phrasing of the overall format (encompassing all requested components) might vary across browsers, or even across browser versions. Individual components’ formats are unspecified: a short-format weekday might be “S”, “Sa”, or “Sat”. The Intl API isn’t intended to expose exactly specified behavior.

    Date/time formatting

    Options

    The primary options properties for date/time formatting are as follows:

    weekday, era
    "narrow", "short", or "long". (era refers to typically longer-than-year divisions in a calendar system: BC/AD, the current Japanese emperor’s reign, or others.)
    month
    "2-digit", "numeric", "narrow", "short", or "long"
    year, day, hour, minute, second
    "2-digit" or "numeric"
    timeZoneName
    "short" or "long"
    timeZone
    Case-insensitive "UTC" will format with respect to UTC. Values like "CEST" and "America/New_York" don’t have to be supported, and they don’t currently work in Firefox.

    The values don’t map to particular formats: remember, the Intl API almost never specifies exact behavior. But the intent is that "narrow", "short", and "long" produce output of corresponding size — “S” or “Sa”, “Sat”, and “Saturday”, for example. (Output may be ambiguous: Saturday and Sunday both could produce “S”.) "2-digit" and "numeric" map to two-digit number strings or full-length numeric strings: “70” and “1970”, for example.

    The final used options are largely the requested options. However, if you don’t specifically request any weekday/year/month/day/hour/minute/second, then year/month/day will be added to your provided options.
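    This defaulting is observable through resolvedOptions() (discussed further below): with an empty options object, numeric year, month, and day are imputed.

    ```javascript
    // If no date/time components are requested at all, the spec adds
    // numeric year, month, and day to the options actually used.
    var fmt = new Intl.DateTimeFormat("en-US", {});
    var used = fmt.resolvedOptions();

    console.log(used.year, used.month, used.day); // numeric numeric numeric
    console.log(used.weekday); // undefined (never requested, never imputed)
    ```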

    Beyond these basic options are a few special options:

    hour12
    Specifies whether hours will be in 12-hour or 24-hour format. The default is typically locale-dependent. (Details such as whether midnight is zero-based or twelve-based and whether leading zeroes are present are also locale-dependent.)

    There are also two special properties, localeMatcher (taking either "lookup" or "best fit") and formatMatcher (taking either "basic" or "best fit"), each defaulting to "best fit". These affect how the right locale and format are selected. The use cases for these are somewhat esoteric, so you should probably ignore them.

    Locale-centric options

    DateTimeFormat also allows formatting using customized calendaring and numbering systems. These details are effectively part of the locale, so they’re specified in the Unicode extension in the language tag.

    For example, Thai as spoken in Thailand has the language tag th-TH. Recall that a Unicode extension has the format -u-key1-value1-key2-value2.... The calendaring system key is ca, and the numbering system key is nu. The Thai numbering system has the value thai, and the Chinese calendaring system has the value chinese. Thus to format dates in this overall manner, we tack a Unicode extension containing both these key/value pairs onto the end of the language tag: th-TH-u-ca-chinese-nu-thai.

    For more information on the various calendaring and numbering systems, see the full DateTimeFormat documentation.

    Examples

    After creating a DateTimeFormat object, the next step is to use it to format dates via the handy format() function. Conveniently, this function is a bound function: you don’t have to call it on the DateTimeFormat directly. Then provide it a timestamp or Date object.

    Putting it all together, here are some examples of how to create DateTimeFormat options for particular uses, with current behavior in Firefox.

    var msPerDay = 24 * 60 * 60 * 1000;
     
    // July 17, 2014 00:00:00 UTC.
    var july172014 = new Date(msPerDay * (44 * 365 + 11 + 197));

    Let’s format a date for English as used in the United States. Let’s include two-digit month/day/year, plus two-digit hours/minutes, and a short time zone to clarify that time. (The result would obviously be different in another time zone.)

    var options =
      { year: "2-digit", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short" };
    var americanDateTime =
      new Intl.DateTimeFormat("en-US", options).format;
     
    print(americanDateTime(july172014)); // 07/16/14, 5:00 PM PDT

    Or let’s do something similar for Portuguese — ideally as used in Brazil, but in a pinch Portugal works. Let’s go for a little longer format, with full year and spelled-out month, but make it UTC for portability.

    var options =
      { year: "numeric", month: "long", day: "numeric",
        hour: "2-digit", minute: "2-digit",
        timeZoneName: "short", timeZone: "UTC" };
    var portugueseTime =
      new Intl.DateTimeFormat(["pt-BR", "pt-PT"], options);
     
    // 17 de julho de 2014 00:00 GMT
    print(portugueseTime.format(july172014));

    How about a compact, UTC-formatted weekly Swiss train schedule? We’ll try the official languages from most to least popular to choose the one that’s most likely to be readable.

    var swissLocales = ["de-CH", "fr-CH", "it-CH", "rm-CH"];
    var options =
      { weekday: "short",
        hour: "numeric", minute: "numeric",
        timeZone: "UTC", timeZoneName: "short" };
    var swissTime =
      new Intl.DateTimeFormat(swissLocales, options).format;
     
    print(swissTime(july172014)); // Do. 00:00 GMT

    Or let’s try a date in descriptive text by a painting in a Japanese museum, using the Japanese calendar with year and era:

    var jpYearEra =
      new Intl.DateTimeFormat("ja-JP-u-ca-japanese",
                              { year: "numeric", era: "long" });
     
    print(jpYearEra.format(july172014)); // 平成26年

    And for something completely different, a longer date for use in Thai as used in Thailand — but using the Thai numbering system and Chinese calendar. (Quality implementations such as Firefox’s would treat plain th-TH as th-TH-u-ca-buddhist-nu-latn, imputing Thailand’s typical Buddhist calendar system and Latin 0-9 numerals.)

    var options =
      { year: "numeric", month: "long", day: "numeric" };
    var thaiDate =
      new Intl.DateTimeFormat("th-TH-u-nu-thai-ca-chinese", options);
     
    print(thaiDate.format(july172014)); // ๒๐ 6 ๓๑

    Calendar and numbering system bits aside, it’s relatively simple. Just pick your components and their lengths.

    Number formatting

    Options

    The primary options properties for number formatting are as follows:

    style
    "currency", "percent", or "decimal" (the default) to format a value of that kind.
    currency
    A three-letter currency code, e.g. USD or CHF. Required if style is "currency", otherwise meaningless.
    currencyDisplay
    "code", "symbol", or "name", defaulting to "symbol". "code" will use the three-letter currency code in the formatted string. "symbol" will use a currency symbol such as $ or £. "name" typically uses some sort of spelled-out version of the currency. (Firefox currently only supports "symbol", but this will be fixed soon.)
    minimumIntegerDigits
    An integer from 1 to 21 (inclusive), defaulting to 1. The resulting string is front-padded with zeroes until its integer component contains at least this many digits. (For example, if this value were 2, formatting 3 might produce “03”.)
    minimumFractionDigits, maximumFractionDigits
    Integers from 0 to 20 (inclusive). The resulting string will have at least minimumFractionDigits, and no more than maximumFractionDigits, fractional digits. The default minimum is currency-dependent (usually 2, rarely 0 or 3) if style is "currency", otherwise 0. The default maximum is 0 for percents, 3 for decimals, and currency-dependent for currencies.
    minimumSignificantDigits, maximumSignificantDigits
    Integers from 1 to 21 (inclusive). If present, these override the integer/fraction digit control above to determine the minimum/maximum significant figures in the formatted number string, as determined in concert with the number of decimal places required to accurately specify the number. (Note that in a multiple of 10 the significant digits may be ambiguous, as in “100” with its one, two, or three significant digits.)
    useGrouping
    Boolean (defaulting to true) determining whether the formatted string will contain grouping separators (e.g. “,” as English thousands separator).

    NumberFormat also recognizes the esoteric, mostly ignorable localeMatcher property.
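    A quick sketch of the significant-digit options in isolation: maximumSignificantDigits rounds away low-order digits, while minimumSignificantDigits pads with trailing zeroes.

    ```javascript
    // Round 1234.5 to three significant digits (grouping stays on by default).
    var threeSig =
      new Intl.NumberFormat("en-US", { maximumSignificantDigits: 3 });
    console.log(threeSig.format(1234.5)); // 1,230

    // Pad 1.5 out to five significant digits.
    var fiveSig =
      new Intl.NumberFormat("en-US", { minimumSignificantDigits: 5 });
    console.log(fiveSig.format(1.5)); // 1.5000
    ```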

    Locale-centric options

    Just as DateTimeFormat supported custom numbering systems in the Unicode extension using the nu key, so too does NumberFormat. For example, the language tag for Chinese as used in China is zh-CN. The value for the Han decimal numbering system is hanidec. To format numbers for these systems, we tack a Unicode extension onto the language tag: zh-CN-u-nu-hanidec.

    For complete information on specifying the various numbering systems, see the full NumberFormat documentation.

    Examples

    NumberFormat objects have a format function property just as DateTimeFormat objects do. And as there, the format function is a bound function that may be used in isolation from the NumberFormat.

    Here are some examples of how to create NumberFormat options for particular uses, with Firefox’s behavior. First let’s format some money for use in Chinese as used in China, specifically using Han decimal numbers (instead of much more common Latin numbers). Select the "currency" style, then use the code for Chinese renminbi (yuan), grouping by default, with the usual number of fractional digits.

    var hanDecimalRMBInChina =
      new Intl.NumberFormat("zh-CN-u-nu-hanidec",
                            { style: "currency", currency: "CNY" });
     
    print(hanDecimalRMBInChina.format(1314.25)); // ¥ 一,三一四.二五

    Or let’s format a United States-style gas price, with its peculiar thousandths-place 9, for use in English as used in the United States.

    var gasPrice =
      new Intl.NumberFormat("en-US",
                            { style: "currency", currency: "USD",
                              minimumFractionDigits: 3 });
     
    print(gasPrice.format(5.259)); // $5.259

    Or let’s try a percentage in Arabic, meant for use in Egypt. Make sure the percentage has at least two fractional digits. (Note that this and all the other RTL examples may appear with different ordering in RTL context, e.g. ٤٣٫٨٠٪ instead of ٤٣٫٨٠٪.)

    var arabicPercent =
      new Intl.NumberFormat("ar-EG",
                            { style: "percent",
                              minimumFractionDigits: 2 }).format;
     
    print(arabicPercent(0.438)); // ٤٣٫٨٠٪

    Or suppose we’re formatting for Persian as used in Afghanistan, and we want at least two integer digits and no more than two fractional digits.

    var persianDecimal =
      new Intl.NumberFormat("fa-AF",
                            { minimumIntegerDigits: 2,
                              maximumFractionDigits: 2 });
     
    print(persianDecimal.format(3.1416)); // ۰۳٫۱۴

    Finally, let’s format an amount of Bahraini dinars, for Arabic as used in Bahrain. Unusually compared to most currencies, Bahraini dinars divide into thousandths (fils), so our number will have three places. (Again note that apparent visual ordering should be taken with a grain of salt.)

    var bahrainiDinars =
      new Intl.NumberFormat("ar-BH",
                            { style: "currency", currency: "BHD" });
     
    print(bahrainiDinars.format(3.17)); // د.ب.‏ ٣٫١٧٠

    Collation

    Options

    The primary options properties for collation are as follows:

    usage
    "sort" or "search" (defaulting to "sort"), specifying the intended use of this Collator. (A search collator might want to consider more strings equivalent than a sort collator would.)
    sensitivity
    "base", "accent", "case", or "variant". This affects how sensitive the collator is to characters that have the same “base letter” but have different accents/diacritics and/or case. (Base letters are locale-dependent: “a” and “ä” have the same base letter in German but are different letters in Swedish.) "base" sensitivity considers only the base letter, ignoring modifications (so for German “a”, “A”, and “ä” are considered the same). "accent" considers the base letter and accents but ignores case (so for German “a” and “A” are the same, but “ä” differs from both). "case" considers the base letter and case but ignores accents (so for German “a” and “ä” are the same, but “A” differs from both). Finally, "variant" considers base letter, accents, and case (so for German “a”, “ä, “ä” and “A” all differ). If usage is "sort", the default is "variant"; otherwise it’s locale-dependent.
    numeric
    Boolean (defaulting to false) determining whether complete numbers embedded in strings are considered when sorting. For example, numeric sorting might produce "F-4 Phantom II", "F-14 Tomcat", "F-35 Lightning II"; non-numeric sorting might produce "F-14 Tomcat", "F-35 Lightning II", "F-4 Phantom II".
    caseFirst
    "upper", "lower", or "false" (the default). Determines how case is considered when sorting: "upper" places uppercase letters first ("B", "a", "c"), "lower" places lowercase first ("a", "c", "B"), and "false" ignores case entirely ("a", "B", "c"). (Note: Firefox currently ignores this property.)
    ignorePunctuation
    Boolean (defaulting to false) determining whether to ignore embedded punctuation when performing the comparison (for example, so that "biweekly" and "bi-weekly" compare equivalent).

    And there’s that localeMatcher property that you can probably ignore.

    Locale-centric options

    The main Collator option specified as part of the locale’s Unicode extension is co, selecting the kind of sorting to perform: phone book (phonebk), dictionary (dict), and many others.

    Additionally, the keys kn and kf may optionally duplicate the numeric and caseFirst properties of the options object. They’re not guaranteed to be supported in the language tag, however, and the options object is much clearer than language-tag components, so it’s best to adjust these settings through options only.

    These key-value pairs are included in the Unicode extension the same way they’ve been included for DateTimeFormat and NumberFormat; refer to those sections for how to specify these in a language tag.

    Examples

    Collator objects have a compare function property. This function accepts two arguments x and y and returns a number less than zero if x compares less than y, 0 if x compares equal to y, or a number greater than zero if x compares greater than y. As with the format functions, compare is a bound function that may be extracted for standalone use.

    Let’s try sorting a few German surnames, for use in German as used in Germany. There are actually two different sort orders in German, phonebook and dictionary. Phonebook sort emphasizes sound, and it’s as if “ä”, “ö”, and so on were expanded to “ae”, “oe”, and so on prior to sorting.

    var names =
      ["Hochberg", "Hönigswald", "Holzman"];
     
    var germanPhonebook = new Intl.Collator("de-DE-u-co-phonebk");
     
    // as if sorting ["Hochberg", "Hoenigswald", "Holzman"]:
    //   Hochberg, Hönigswald, Holzman
    print(names.sort(germanPhonebook.compare).join(", "));

    Some German words conjugate with extra umlauts, so in dictionaries it’s sensible to order ignoring umlauts (except when ordering words differing only by umlauts: schon before schön).

    var germanDictionary = new Intl.Collator("de-DE-u-co-dict");
     
    // as if sorting ["Hochberg", "Honigswald", "Holzman"]:
    //   Hochberg, Holzman, Hönigswald
    print(names.sort(germanDictionary.compare).join(", "));

    Or let’s sort a list of Firefox versions with various typos (different capitalizations, random accents and diacritical marks, extra hyphenation), in English as used in the United States. We want to sort respecting version number, so we use a numeric sort so that numbers in the strings are compared as numbers rather than character by character.

    var firefoxen =
      ["FireFøx 3.6",
       "Fire-fox 1.0",
       "Firefox 29",
       "FÍrefox 3.5",
       "Fírefox 18"];
     
    var usVersion =
      new Intl.Collator("en-US",
                        { sensitivity: "base",
                          numeric: true,
                          ignorePunctuation: true });
     
    // Fire-fox 1.0, FÍrefox 3.5, FireFøx 3.6, Fírefox 18, Firefox 29
    print(firefoxen.sort(usVersion.compare).join(", "));

    Last, let’s do some locale-aware string searching that ignores case and accents, again in English as used in the United States.

    // Comparisons work with both composed and decomposed forms.
    var decoratedBrowsers =
      [
       "A\u0362maya",  // A͢maya
       "CH\u035Brôme", // CH͛rôme
       "FirefÓx",
       "sAfàri",
       "o\u0323pERA",  // ọpERA
       "I\u0352E",     // I͒E
      ];
     
    var fuzzySearch =
      new Intl.Collator("en-US",
                        { usage: "search", sensitivity: "base" });
     
    function findBrowser(browser)
    {
      function cmp(other)
      {
        return fuzzySearch.compare(browser, other) === 0;
      }
      return cmp;
    }
     
    print(decoratedBrowsers.findIndex(findBrowser("Firêfox"))); // 2
    print(decoratedBrowsers.findIndex(findBrowser("Safåri")));  // 3
    print(decoratedBrowsers.findIndex(findBrowser("Ãmaya")));   // 0
    print(decoratedBrowsers.findIndex(findBrowser("Øpera")));   // 4
    print(decoratedBrowsers.findIndex(findBrowser("Chromè")));  // 1
    print(decoratedBrowsers.findIndex(findBrowser("IË")));      // 5

    Odds and ends

    It may be useful to determine whether support for some operation is provided for particular locales, or to determine whether a locale is supported. Intl provides supportedLocalesOf() functions on each constructor, and resolvedOptions() functions on each prototype, to expose this information.

    var navajoLocales =
      Intl.Collator.supportedLocalesOf(["nv"], { usage: "sort" });
    print(navajoLocales.length > 0
          ? "Navajo collation supported"
          : "Navajo collation not supported");
     
    var germanFakeRegion =
      new Intl.DateTimeFormat("de-XX", { timeZone: "UTC" });
    var usedOptions = germanFakeRegion.resolvedOptions();
    print(usedOptions.locale);   // de
    print(usedOptions.timeZone); // UTC

    Legacy behavior

    The ES5 toLocaleString-style and localeCompare functions previously had no particular semantics, accepted no particular options, and were largely useless. So the i18n API reformulates them in terms of Intl operations. Each method now accepts additional trailing locales and options arguments, interpreted just as the Intl constructors would do. (Except that for toLocaleTimeString and toLocaleDateString, different default components are used if options aren’t provided.)
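    For example, Number.prototype.toLocaleString with locales and options arguments is specified to produce the same string as a NumberFormat constructed from those arguments:

    ```javascript
    var amount = 1234.56;
    var options = { style: "currency", currency: "USD" };

    // The reformulated legacy method...
    var viaLegacy = amount.toLocaleString("en-US", options);
    // ...and the equivalent Intl primitive.
    var viaIntl = new Intl.NumberFormat("en-US", options).format(amount);

    console.log(viaLegacy);             // $1,234.56
    console.log(viaLegacy === viaIntl); // true
    ```

    The difference is purely one of cost: the legacy call re-does locale and option processing on every invocation, while the NumberFormat object does it once up front.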

    For brief use where precise behavior doesn’t matter, the old methods are fine to use. But if you need more control or are formatting or comparing many times, it’s best to use the Intl primitives directly.

    Conclusion

    Internationalization is a fascinating topic whose complexity is bounded only by the varied nature of human communication. The Internationalization API addresses a small but quite useful portion of that complexity, making it easier to produce locale-sensitive web applications. Go use it!

    (And a special thanks to Norbert Lindenberg, Anas El Husseini, Simon Montagu, Gary Kwong, Shu-yu Guo, Ehsan Akhgari, the people of #mozilla.de, and anyone I may have forgotten [sorry!] who provided feedback on this article or assisted me in producing and critiquing the examples. The English and German examples were the limit of my knowledge, and I’d have been completely lost on the other examples without their assistance. Blame all remaining errors on me. Thanks again!)

  4. QuaggaJS – Building a barcode-scanner for the Web

    Have you ever tried to type in a voucher code on your mobile phone, or simply enter the number of your membership card into a web form?

    These are just two examples of time-consuming and error-prone tasks which can be avoided by taking advantage of printed barcodes. This is nothing new; many solutions exist for reading barcodes with a regular camera, like zxing, but they require a native platform such as Android or iOS. I wanted a solution which works on the Web, without plugins of any sort, and which even Firefox OS could leverage.

    My general interest in computer vision and web technologies fueled my curiosity about whether something like this would be possible. Not just a simple scanner, but a scanner equipped with localization mechanisms to find a barcode in real-time.

    The result is a project called QuaggaJS, which is hosted on GitHub. Take a look at the demo pages to get an idea of what this project is all about.

    Reading Barcodes

    How does it work?

    Simply speaking, the pipeline can be divided into the following three steps:

    1. Reading the image and converting it into a binary representation
    2. Determining the location and rotation of the barcode
    3. Decoding the barcode based on its type (e.g. EAN, Code128)

    The first step requires the source to be either a webcam stream or an image file, which is then converted into gray-scale and stored in a 1D array. After that, the image data is passed along to the locator, which is responsible for finding a barcode-like pattern in the image. And finally, if a pattern is found, the decoder tries to read the barcode and return the result. You can read more about these steps in how barcode localization works in QuaggaJS.

    The real-time challenge

    One of the main challenges was to get the pipeline up to speed and fast enough to be considered a real-time application. When talking about real-time in image-processing applications, I consider 25 frames per second (FPS) the lower boundary. This means that the entire pipeline has to be completed in at most 40 ms.

    The core parts of QuaggaJS are made up of computer vision algorithms which tend to be quite heavy on array access. As I already mentioned, the input image is stored in a 1D array. This is not a regular JavaScript Array, but a Typed Array. Since the image has already been converted to gray-scale in the first step, the range of each pixel’s value is set between 0 and 255. This is why Uint8Arrays are used for all image-related buffers.

    Memory efficiency

    One of the key ways to achieve real-time speed for interactive applications is to create memory-efficient code which avoids large GC (garbage collection) pauses. That is why I removed most of the memory allocation calls by simply reusing initially created buffers. However, this is only useful for buffers whose size is known up front and does not change over time, as with images.
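    The reuse pattern can be sketched like this (the names are illustrative, not QuaggaJS internals): the buffer is allocated once, when the image size is known, and then overwritten in place on every frame.

```javascript
function FrameBuffer(width, height) {
  // allocated once, when the image size is known
  this.data = new Uint8Array(width * height);
}

FrameBuffer.prototype.load = function (pixels) {
  // overwrite in place on every frame: no new allocation, no GC pressure
  this.data.set(pixels);
};

var frame = new FrameBuffer(2, 2);
frame.load([10, 20, 30, 40]);
frame.load([50, 60, 70, 80]); // same buffer, new frame
console.log(frame.data[0]);   // 50
```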

    Profiling

    When you are curious why a certain part of your application runs too slow, a CPU profile may come in handy.

    Firefox includes some wonderful tools to create CPU profiles for the running JavaScript code. During development, this proved to be viable for pinpointing performance bottlenecks and finding functions which caused the most load on the CPU. The following profile was recorded during a session with a webcam on an Intel Core i7-4600U. (Config: video 640×480, half-sampling barcode-localization)


    The profile is zoomed in and shows four subsequent frames. On average, one frame in the pipeline is processed in roughly 20 ms. This can be considered fast enough, even when running on machines having a less powerful CPU, like mobile phones or tablets.

    I marked each step of the pipeline in a different color; green is the first, blue the second and red the third one. The drill-down shows that the localization step consumes most of the time (55.6 %), followed by reading the input stream (28.4 %) and finally by decoding (3.7 %). It is also worth noting that skeletonize is one of the most expensive functions in terms of CPU usage. Because of that, I re-implemented the entire skeletonizing algorithm in asm.js by hand to see whether it could run even faster.

    asm.js

    Asm.js is a highly optimizable subset of JavaScript that can execute at close to native speed. It promises a lot of performance gains when used for compute-intensive tasks (take a look at MASSIVE), like most computer vision algorithms. That’s why I ported the entire skeletonizer module to asm.js. This was a very tedious task, because you are actually not supposed to write asm.js code by hand. Usually asm.js code is generated when it is cross-compiled from C/C++ or other LLVM languages using emscripten. But I did it anyway, just to prove a point.

    The first thing that needs to be sorted out is how to get the image data into the asm.js module, along with parameters like the size of the image. The module is designed to fit right into the existing implementation and therefore incorporates some constraints, like a square image size. However, the skeletonizer is only applied on chunks of the original image, which are all square by definition. Not only is the input data relevant, but three temporary buffers are also needed during processing (eroded, temp, skeleton).

    In order to cover that, an initial buffer is created, big enough to hold all four images at once. The buffer is shared between the caller and the module. Since we are working with a single buffer, we need to keep a reference to the position of each image. It’s like playing with pointers in C.

    function skeletonize() {
      var subImagePtr = 0,
        erodedImagePtr = 0,
        tempImagePtr = 0,
        skelImagePtr = 0;
     
      erodedImagePtr = imul(size, size) | 0;
      tempImagePtr = (erodedImagePtr + erodedImagePtr) | 0;
      skelImagePtr = (tempImagePtr + erodedImagePtr) | 0;
      // ...
    }

    To get a better understanding of the idea behind the structure of the buffer, compare it with the following illustration:

    Buffer in Skeletonizer

    The buffer in green represents the allocated memory, which is passed into the asm.js module upon creation. This buffer is then divided into four blue blocks, each of which contains the data for the respective image. In order to get a reference to the correct data block, the variables (ending with Ptr) point to that exact position.

    Now that we have set up the buffer, it is time to take a look at the erode function, which is part of the skeletonizer written in vanilla JavaScript:

    function erode(inImageWrapper, outImageWrapper) {
      var v,
        u,
        inImageData = inImageWrapper.data,
        outImageData = outImageWrapper.data,
        height = inImageWrapper.size.y,
        width = inImageWrapper.size.x,
        sum,
        yStart1,
        yStart2,
        xStart1,
        xStart2;
     
      for ( v = 1; v < height - 1; v++) {
        for ( u = 1; u < width - 1; u++) {
          yStart1 = v - 1;
          yStart2 = v + 1;
          xStart1 = u - 1;
          xStart2 = u + 1;
          sum = inImageData[yStart1 * width + xStart1] +
            inImageData[yStart1 * width + xStart2] +
            inImageData[v * width + u] +
            inImageData[yStart2 * width + xStart1] +
            inImageData[yStart2 * width + xStart2];
     
          outImageData[v * width + u] = sum === 5 ? 1 : 0;
        }
      }
    }

    This code was then modified to conform to the asm.js specification.

    "use asm";
     
    // initially creating a view on the buffer (passed in)
    var images = new stdlib.Uint8Array(buffer),
      size = foreign.size | 0;
     
    function erode(inImagePtr, outImagePtr) {
      inImagePtr = inImagePtr | 0;
      outImagePtr = outImagePtr | 0;
     
      var v = 0,
        u = 0,
        sum = 0,
        yStart1 = 0,
        yStart2 = 0,
        xStart1 = 0,
        xStart2 = 0,
        offset = 0;
     
      for ( v = 1; (v | 0) < ((size - 1) | 0); v = (v + 1) | 0) {
        offset = (offset + size) | 0;
        for ( u = 1; (u | 0) < ((size - 1) | 0); u = (u + 1) | 0) {
          yStart1 = (offset - size) | 0;
          yStart2 = (offset + size) | 0;
          xStart1 = (u - 1) | 0;
          xStart2 = (u + 1) | 0;
          sum = ((images[(inImagePtr + yStart1 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart1 + xStart2) | 0] | 0) +
            (images[(inImagePtr + offset + u) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart2) | 0] | 0)) | 0;
          if ((sum | 0) == (5 | 0)) {
            images[(outImagePtr + offset + u) | 0] = 1;
          } else {
            images[(outImagePtr + offset + u) | 0] = 0;
          }
        }
      }
      return;
    }

    Although the basic code structure did not change significantly, the devil is in the detail. Instead of passing in references to JavaScript objects, the respective indexes of the input and output images, pointing into the buffer, are used. Another noticeable difference is the repeated casting of values to integers with the | 0 notation, which is necessary for safe array access. There is also an additional variable offset defined, which is used as a counter to keep track of the absolute position in the buffer. This approach replaces the multiplication used for determining the current position. In general, asm.js does not allow multiplication of integers except via the imul operator.

    Finally, the ternary operator ( ? : ) is forbidden in asm.js, so it has simply been replaced by a regular if...else condition.
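    Outside asm.js, the same coercions can be observed in plain JavaScript, which makes the intent of the | 0 annotations and imul clearer:

```javascript
// `| 0` truncates a number to a signed 32-bit integer, and Math.imul
// performs the C-style 32-bit multiplication that asm.js mandates
// instead of the `*` operator.
console.log(12.7 | 0);                 // 12
console.log(Math.imul(3, 4));          // 12
console.log(Math.imul(0x80000000, 2)); // 0: the product wraps around int32
```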

    Performance comparison

    And now it is time to answer the more important question: How much faster is the asm.js implementation compared to regular JavaScript? Let’s take a look at the performance profiles, of which the first one represents the normal JavaScript version and the second asm.js.

    Image Stream Profile

    Image Stream Profile (asm.js)

    Surprisingly, the difference between the two implementations is not as big as you might expect (~10%). Apparently, the initial JavaScript code was already written cleanly enough that the JIT compiler could take full advantage of it. This assumption can only be proven wrong or right if someone re-implements the algorithm in C/C++ and cross-compiles it to asm.js using emscripten. I’m almost sure that the result would differ from my naïve port and produce much more optimized code.

    getUserMedia

    Besides performance, there are many other parts that must fit together in order to get the best experience. One of those parts is the portal to the user’s world, the camera. As we all know, getUserMedia provides an API to gain access to the device’s camera. Here, the difficulty lies within the differences among all major browser vendors, where the constraints, resolutions and events are handled differently.

    Front/back-facing

    If you are targeting devices other than regular laptops or computers, the chances are high that these devices offer more than one camera. Nowadays almost every tablet or smartphone has a back- and front-facing camera. When using Firefox, selecting the camera programmatically is not possible. Every time the user confirms access to the camera, he or she has to select the desired one. This is handled differently in Chrome, where MediaStreamTrack.getSources exposes the available sources which can then be filtered. You can find the defined sources in the W3C draft.

    The following snippet demonstrates how to get preferred access to the user’s back-facing camera:

    MediaStreamTrack.getSources(function(sourceInfos) {
      var envSource = sourceInfos.filter(function(sourceInfo) {
        return sourceInfo.kind == "video"
            && sourceInfo.facing == "environment";
      }).reduce(function(a, source) {
        return source;
      }, null);
      var constraints = {
        audio : false,
        video : {
          optional : [{
            sourceId : envSource ? envSource.id : null
          }]
        }
      };
    });

    In the use-case of barcode-scanning, the user is most likely going to use the device’s back-facing camera. This is where choosing a camera up front can enormously improve the user experience.

    Resolution

    Another very important topic when working with video is the actual resolution of the stream. This can be controlled with additional constraints to the video stream.

    var hdConstraint = {
      video: {
        mandatory: {
          width: { min: 1280 },
          height: { min: 720 }
        }
      }
    };

    The above snippet, when added to the video constraints, tries to get a video stream of the specified quality. If no camera meets those requirements, a ConstraintNotSatisfiedError is returned in the callback. However, these constraints are not fully compatible across browsers, since some use minWidth and minHeight instead.
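    One way to cope with those divergent spellings is to build the constraints per branch; this is a sketch (in practice a feature test or user-agent check would choose the branch, and the function name is illustrative):

```javascript
function hdConstraints(useLegacyNames) {
  // Some browsers at the time read minWidth/minHeight directly,
  // others the spec-style { min: ... } objects.
  var mandatory = useLegacyNames
    ? { minWidth: 1280, minHeight: 720 }              // older spelling
    : { width: { min: 1280 }, height: { min: 720 } }; // spec-style spelling
  return { audio: false, video: { mandatory: mandatory } };
}
```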

    Autofocus

    Barcodes are typically rather small and must be close-up to the camera in order to be correctly identified. This is where a built-in auto-focus can help to increase the robustness of the detection algorithm. However, the getUserMedia API lacks functionality for triggering the auto-focus and most devices do not even support continuous autofocus in browser-mode. If you have an up-to-date Android device, chances are high that Firefox is able to use the autofocus of your camera (e.g. Nexus 5 or HTC One). Chrome on Android does not support it yet, but there is already an issue filed.

    Performance

    And there is still the question of the performance impact caused by grabbing the frames from the video stream. The results have already been presented in the profiling section. They show that almost 30%, or 8 ms, of CPU time is consumed just fetching the image and storing it in a TypedArray instance. The typical process of reading the data from a video source looks as follows:

    1. Make sure the camera-stream is attached to a video-element
    2. Draw the image to a canvas using ctx.drawImage
    3. Read the data from the canvas using ctx.getImageData
    4. Convert the video to gray-scale and store it inside a TypedArray
    var video = document.getElementById("camera"),
        ctx = document.getElementById("canvas").getContext("2d"),
        ctxData,
        width = video.videoWidth,
        height = video.videoHeight,
        data = new Uint8Array(width*height);
     
    ctx.drawImage(video, 0, 0);
    ctxData = ctx.getImageData(0, 0, width, height).data;
    computeGray(ctxData, data);
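    The computeGray function called above is not shown in the article; here is a minimal sketch of what it might look like, assuming the common luminance weights (the exact weights QuaggaJS uses may differ):

```javascript
function computeGray(rgbaData, grayData) {
  // one gray byte per pixel, from the R, G, B channels of the canvas data
  for (var i = 0, len = grayData.length; i < len; i++) {
    grayData[i] = Math.round(0.299 * rgbaData[i * 4] +
                             0.587 * rgbaData[i * 4 + 1] +
                             0.114 * rgbaData[i * 4 + 2]);
  }
}

// two pixels: pure white and pure black
var rgba = new Uint8Array([255, 255, 255, 255,  0, 0, 0, 255]);
var gray = new Uint8Array(2);
computeGray(rgba, gray);
console.log(gray[0], gray[1]); // 255 0
```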

    It would be very much appreciated if there were a way to get lower-level access to the camera frames without going through the hassle of drawing and reading every single image. This is especially important when processing higher resolution content.

    Wrap up

    It has been real fun to create a project centered on computer vision, especially because it connects so many parts of the web platform. Hopefully, limitations such as the missing auto-focus on mobile devices, or reading the camera stream, will be sorted out in the near future. Still, it is pretty amazing what you can build nowadays by simply using HTML and JavaScript.

    Another lesson learned is that implementing asm.js by hand is both hard and unnecessary if you already know how to write proper JavaScript code. However, if you already have an existing C/C++ codebase which you would like to port, emscripten does a wonderful job. This is where asm.js comes to the rescue.

    Finally, I hope more and more people are jumping on the computer vision path, even if technologies like WebCL are still a long way down the road. The future for Firefox might even be for ARB_compute_shader to eventually jump on to the fast track.

  5. Videos and Firefox OS

    Before HTML5

    Those were dark times Harry, dark times – Rubeus Hagrid

    Before HTML5, displaying video on the Web required browser plugins and Flash.

    Luckily, Firefox OS supports HTML5 video so we don’t need to support these older formats.

    Video support on the Web

    Even though modern browsers support HTML5, the video formats they support vary:

    In summary, to support the most browsers with the fewest formats you need the MP4 and WebM video formats (Firefox prefers WebM).

    Multiple sizes

    Now that you have seen what formats you can use, you need to decide on video resolutions, as desktop users on high speed wifi will expect better quality videos than mobile users on 3G.

    At Rormix we decided on 720p for desktop, 360p for mobile connections, and 180p specially for Firefox OS to reduce the cost in countries with high data charges.

    There are no hard and fast rules — it depends on who your market audience is.

    Streaming?

    The best streaming solution would be to automatically serve the user different video sizes depending on their connection status (adaptive streaming), but support for this technology is poor.

    HTTP live streaming works well on Apple devices, but has poor support on Android.

    At the time of writing, the most promising technology is MPEG DASH, which is an international standard.

    In summary, we are going to have to wait before we get an adaptive streaming technology that is widely accepted (Firefox does not support HLS or MPEG DASH).

    DIY Adaptive streaming

    In the absence of adaptive streaming we need to try to work out the best video quality to load at the outset. The following is a quick guide to help you decide:

    Wifi or 3G

    Using a certified Firefox OS app you can check to see if the user is on wifi or not.

    var lock    = navigator.mozSettings.createLock();
    var setting = lock.get('wifi.enabled');
     
    setting.onsuccess = function () {
      console.log('wifi.enabled: ' + setting.result);
    }
     
    setting.onerror = function () {
      console.warn('An error occurred: ' + setting.error);
    }

    https://developer.mozilla.org/en-US/docs/Web/API/Settings_API

    There is some more information at the W3C Device API.

    Detecting screen size

    There is no point sending a 720p video to a user with a screen smaller than 720p. There are many ways to get the different bounds of a user’s screen; innerWidth and width allow you to get a good idea:

    function getVidSize()
    {
      //Get the width of the phone (rotation independent)
      var min = Math.min($(window).innerHeight(),$(window).innerWidth());
      //Return a video size we have
      if(min < 320)      return '180';
      else if(min < 550) return '360';
      else               return '720';
    }

    http://www.quirksmode.org/m/tests/widthtest.html

    Determining internet speed

    It is difficult to get an accurate read of a user’s internet speed using web technologies — usually they involve loading a large image onto the user’s device and timing it. This has the disadvantage of having to send more data to the user. Some services such as: http://speedof.me/api.html exist, but still require data downloads to the user’s device. (Stackoverflow has some more options.)

    You can be slightly more clever by using HTML5, and checking the time it takes between the user starting the video and a set amount of the video loading. This way we do not need to load any extra data on the user’s device. A quick VideoJS example follows:

    var global_speedcount = 0;
    var global_video = null;
    global_video = videojs("video", {}, function(){
    //Set up video sources
    });
     
    global_video.on('play',function(){
      //User has clicked play
      global_speedcount = new Date().getTime();
    });
     
    function timer()
    {
      var diff = new Date().getTime() - global_speedcount;
      //Remove this handler as it is run multiple times per second!
      global_video.off('timeupdate',timer);
    }
     
    global_video.on('timeupdate',timer);

    This code starts timing when the user clicks play; once the browser actually starts playing the video, timeupdate events begin to fire and the handler measures the elapsed time. You can also use this technique to detect whether a lot of buffering is happening.
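    One way to act on that measurement is to map the elapsed time to one of the renditions mentioned earlier; the thresholds here are illustrative assumptions, not values from the article:

```javascript
// Map the time until the first timeupdate event to a video size.
function pickQuality(msUntilFirstTimeupdate) {
  if (msUntilFirstTimeupdate > 3000) return '180'; // very slow: smallest rendition
  if (msUntilFirstTimeupdate > 1000) return '360';
  return '720';                                    // fast connection
}

console.log(pickQuality(500));  // '720'
console.log(pickQuality(5000)); // '180'
```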

    Detect high resolution devices

    One final thing to determine is whether or not a user has a high pixel density screen. In this case even if they have a small screen it can still have a large number of pixels (and therefore require a higher resolution video).

    Modernizr has a plugin for detecting hi-res screens.

    if (Modernizr.highresdisplay)
    {
      alert('Your device has a high resolution screen');
    }
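    Without Modernizr, window.devicePixelRatio gives a similar signal in most modern browsers: a ratio above 1 means more physical pixels than CSS pixels. A sketch (the 1.5 threshold is an assumption, not a standard cut-off):

```javascript
function isHighRes(ratio) {
  return ratio > 1.5;
}

// guard so the snippet also runs outside a browser
var ratio = (typeof window !== 'undefined' && window.devicePixelRatio) || 1;
if (isHighRes(ratio)) {
  console.log('High resolution screen: serve the larger rendition');
}
```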

    WebP Thumbnails

    Not to get embroiled in an argument, but at Rormix we have seen an average decrease of 30% in file size (WebP vs JPEG) with no loss of quality (in some cases up to 50% less). And in countries with expensive data plans, the less data the better.

    We encode all of our thumbnails in multiple resolutions of WebP and send them to every device that supports them to reduce the amount of data being sent to the user.
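    On the server side, WebP support can be detected from the Accept request header, which WebP-capable browsers advertised at the time. A minimal sketch (the helper names are illustrative):

```javascript
function supportsWebP(acceptHeader) {
  return /image\/webp/.test(acceptHeader || '');
}

// pick the thumbnail variant per request
function thumbnailFor(id, acceptHeader) {
  return id + (supportsWebP(acceptHeader) ? '.webp' : '.jpg');
}

console.log(thumbnailFor('thumb', 'image/webp,*/*;q=0.8')); // thumb.webp
console.log(thumbnailFor('thumb', 'image/jpeg'));           // thumb.jpg
```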

    Mobile considerations

    If you are playing HTML5 videos on mobile devices, their behavior differs. On iOS it automatically goes to full screen on iPhones/iPods, but not on tablets.

    Some libraries such as VideoJS have removed the controls from mobile devices until their stability increases.

    Useful libraries

    There are a few useful HTML5 video libraries:

    Mozilla links

    Mozilla has some great articles on web video:

    Other useful Links

  6. Mozilla Hacks gets a new Editor

    Almost three and a half years ago I wrote my first article for Mozilla Hacks and have been the Editor since September 2012. As the face and caretaker of this blog for such a long time, having published 350 posts in two years, I want to take the opportunity to thank you all for reading, and to pass on the torch to its new Editor.

    I’ve been in the same team as Havi Hoffman since I started at Mozilla, and I’m now glad to announce that she is the new Editor of Mozilla Hacks!

    I know there are a number of interesting upcoming articles and I’d strongly recommend you keep on reading this blog. Also make sure to follow it on Twitter: @mozhacks.

    All the comments & interesting discussions we’ve had over the years have been much appreciated, and thanks to all the authors who have taken their valuable time to share their knowledge and to help create a great Open Web.

    Thanks for now, and see you on the Internet!

  7. Firebug 3 & Multiprocess Firefox (e10s)

    Firebug 3 alpha was announced a couple of weeks ago. This version represents the next generation of Firebug, built on top of Firefox’s native developer tools.

    There are several reasons why having Firebug built on top of native developer tools in Firefox is an advantage — one of them is tight integration with the existing platform. This direction allows simple use of available platform components. This is important especially for upcoming multiprocess support in Firefox (also called Electrolysis or E10S).

    From wiki:

    The goal of the Electrolysis project (“e10s” for short) is to run web content in a separate process from Firefox itself. The two major advantages of this model are security and performance.

    The e10s project introduces a great leap ahead in terms of security and performance, as well as putting more emphasis on the internal architecture of add-ons. The main challenge (for many extensions) is solving communication problems between processes. The add-on’s code will run in a different process (browser chrome process) from web page content (page content process) — see the diagram below. Every time an extension needs to access the web page it must use one of the available inter-process communication channels (e.g. message manager or remote debugging protocol). Direct access is no longer possible. This often means that many of the existing synchronous APIs will turn into asynchronous APIs.

    Developer tools, including Firebug, deal with the content in many ways. Tools usually collect a large amount of (meta) data about the debugged page and present it to the user. Various CSS and DOM inspectors not only display internal content data, but also allow the user to edit them and see live changes. All these features require heavy interaction between a tool and the page content.

    So Firebug, built on top of the existing developer tools infrastructure that already ensures basic interaction with the debugged page, allows us to focus more on new features and user experience.

    Firebug Compatibility

    Firebug 2.0 is compatible with Firefox 30 – 36 and will support upcoming non-multiprocess browsers (as well as the recently announced browser for developers).

    Firebug 3.0 alpha (aka Firebug.next) is currently compatible with Firefox 35 – 36 and will support upcoming multiprocess (as well as non-multiprocess) browsers.

    Upgrade From Firebug 2

    If you install Firebug 2 into a multiprocess (e10s) enabled browser, you’ll be prompted to upgrade to Firebug 3 or switch off the multiprocess support.

    Learn more…

    Upgrading to Firebug 3 is definitely the recommended option. You might miss some Firebug 2 features in Firebug 3 (it’s still in the alpha phase), like Firebug extensions, but this is the right time to provide feedback and let us know which features are priorities for you.

    You can follow us on Twitter to be updated.

    Leave a comment here or on the Firebug newsgroup.

    Jan ‘Honza’ Odvarko

  8. MetricsGraphics.js – a lightweight graphics library based on D3

    MetricsGraphics.js is a library built on top of D3 that is optimized for visualizing and laying out time-series data. It provides a simple way to produce common types of graphics in a principled and consistent way. The library supports line charts, scatterplots, histograms, barplots and data tables, as well as features like rug plots and basic linear regression.

    The library elevates the layout and explanation of these graphics to the same level of priority as the graphics. The emergent philosophy is one of efficiency and practicality.

    Hamilton Ulmer and I began building the library earlier this year, during which time we found ourselves copy-and-pasting bits of code in various projects. This led to errors and inconsistent features, and so we decided to develop a single library that provides common functionality and aesthetics to all of our internal projects.

    Moreover, at the time, we were having limited success with our attempts to get casual programmers and non-programmers within the organization to use a library like D3 to create dashboards. The learning curve was proving a bit of an obstacle. So it seemed reasonable to create a level of indirection using well-established design patterns to try and bridge that chasm.

    Our API is simple. All that’s needed to create a graphic is to specify a few default parameters and then, if desired, override one or more of the optional parameters on offer. We don’t maintain state. To update a graphic, one would call data_graphic on the same target element.

    The library is also data-source agnostic. While it provides a number of convenience functions and options that allow for graphics to better handle things like missing observations, it doesn’t care where the data comes from.

    A quick tutorial

    Here’s a quick tutorial to get you started. Say that we have some data on a scholarly topic like UFO sightings. We decide that we’re interested in creating a line chart of yearly sightings.

    We create a JSON file called data/ufo-sightings.json based on the original dataset, where we aggregate yearly sightings. The data doesn’t have to be JSON of course, but that will mean less work later on.

    The next thing we do is load the data:

    d3.json('data/ufo-sightings.json', function(data) {
    })

    data_graphic expects the data object to be an array of objects, which is already the case for us. That’s good. It also needs dates to be timestamps if they’re in a format like yyyy-mm-dd. We’ve got aggregated yearly data, so we don’t need to worry about that. So now, all we need to do is create the graphic and place it in the element specified in target.
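    If the dates had been strings like yyyy-mm-dd, they would need converting first; a minimal sketch (the 'date' field name is illustrative, and with D3 v3, which the library builds on, d3.time.format('%Y-%m-%d').parse would do the same job):

```javascript
// 'yyyy-mm-dd' -> Date at UTC midnight
function parseDay(s) {
  var p = s.split('-');
  return new Date(Date.UTC(+p[0], p[1] - 1, +p[2]));
}

var row = { date: '1964-06-01', sightings: 42 };
row.date = parseDay(row.date);
console.log(row.date.getUTCFullYear()); // 1964
```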

    d3.json('data/ufo-sightings.json', function(data) {
        data_graphic({
            title: "UFO Sightings",
            description: "Yearly UFO sightings (1945 to 2010).",
            data: data,
            width: 650,
            height: 150,
            target: '#ufo-sightings',
            x_accessor: 'year',
            y_accessor: 'sightings',
            markers: [{'year': 1964, 
                       'label': '"The Creeping Terror" released'
            }]
        })
    })

    And this is what we end up with. In this example, we’re adding a marker to draw attention to a particular data point. This is optional of course.

    A line chart in MetricsGraphics.js

    A few final remarks

    We follow a real-needs approach to development. Right now, we have mostly implemented features that have been important to us. Having said that, our work is available on Github, as are many of our discussions, and we take any and all pull requests and issues seriously.

    There is still a lot of work to be done. We invite you to take the library out for a spin and file bugs! We’ve set up a sandbox that you can use to try things out without having to download anything: http://metricsgraphicsjs.org

    MetricsGraphics.js v1.1 is scheduled for release on December 1, 2014.

  9. Save the Web – Be a Ford-Mozilla Open Web Fellow

    This is a critical time in the evolution of the Web. Its core ethos of being free and open is at risk with too little interoperability and threats to privacy, security, and expression from governments throughout the world.

    To protect the Web, we need more people with technical expertise to get involved at the policy level. That’s why we created the Ford-Mozilla Open Web Fellowship.

    Photo: Joseph Gruber via Flickr

    What it is

    The Fellowship is a 10-month paid program that immerses engineers, data scientists, and makers in projects that create a better understanding of Internet policy issues among civil society, policy makers, and the broader public.

    What you’ll do

    Fellows will be embedded in one of five host organizations, each of which is leading in the fight to protect the open Web:

    • The American Civil Liberties Union
    • Public Knowledge
    • Free Press
    • The Open Technology Institute
    • Amnesty International

    The Fellows will serve as technology advisors, mentors, and ambassadors to these host organizations, helping to better inform the policy discussion.

    Photo: Alain Christian via Flickr

    What you’ll learn

    The program is a great opportunity for emerging technical leaders to take the next step in their careers. Fellows will have the opportunity to further develop their technical skills, learn about critical Internet policy issues, make strong connections in the policy field, and be recognized for their contributions.

    The standard fellowship offers a stipend of $60,000 over 10 months, plus supplements for travel, housing, child care, health insurance, and moving expenses, and helps pay for research/equipment and books.

    For more information, and to apply by the December 31 deadline, please visit http://advocacy.mozilla.org.

  10. Visually Representing Angular Applications

    This article concerns diagrammatically representing Angular applications. It is a first step, not a fully figured out dissertation about how to visually specify or document Angular apps. And maybe the result of this is that I, with some embarrassment, find out that someone else already has a complete solution.

    My interest in this springs from two ongoing projects:

    1. My day job working on the next generation version of Desk.com’s support center agent application and
    2. My night job working on a book, Angular In Depth, for Manning Publications

    1: Large, complex Angular application

    The first involves working on a large, complex Angular application as part of a multi-person front-end team. One of the problems I, and I assume other team members, encounter (hopefully I’m not the only one) is getting familiar enough with different parts of the application so that my additions or changes don’t hose it or cause problems down the road.

    With an Angular application it is sometimes challenging to trace what’s happening where. Directives give you the ability to encapsulate behavior and let you employ that behavior declaratively. That’s great. Until you have nested directives or multiple directives operating in tandem that someone else painstakingly wrote. That person probably had a clear vision of how everything related and worked together. But, when you come to it newly, it can be challenging to trace the pieces and keep them in your head as you begin to add features.
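    To make the nesting problem concrete, here is a hedged sketch (not from any real codebase) of two Angular 1.x-style directive definition objects, where the inner directive uses `require` to reach the controller of an ancestor directive. The names `tabSet` and `tabPane` are hypothetical; the coupling between the two only becomes visible once you read both definitions together, which is exactly what makes nested directives hard to trace:

    ```javascript
    // Outer directive: owns the shared state via its controller.
    var tabSet = {
      restrict: 'E',
      controller: function () {
        this.panes = [];
        this.addPane = function (pane) { this.panes.push(pane); };
      }
    };

    // Inner directive: declares a dependency on an ancestor directive's
    // controller with `require: '^tabSet'` and registers itself there.
    var tabPane = {
      restrict: 'E',
      require: '^tabSet',
      link: function (scope, element, attrs, tabSetCtrl) {
        tabSetCtrl.addPane({ title: attrs.title });
      }
    };
    ```

    Neither definition alone tells the whole story; the `require` string is the only textual hint that `tabPane` cannot function outside a `tabSet`, which is the kind of relationship a diagram surfaces at a glance.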

    Wouldn’t it be nice to have a visual representation of complex parts of an Angular application? Something that gives you the lay-of-the-land so you can see at a glance what depends on what.

    2: The book project

    The second item above — the book project — involves trying to write about how Angular works under-the-covers. I think most Angular developers have at one time or another viewed some part of Angular as magical. We’ve also all cursed the documentation, particularly those descriptions that use terms whose own descriptions rely on yet more terms that are poorly defined unless you already understand the first item in the chain.

    There’s nothing wrong with using Angular directives or services as demonstrated in online examples or in the documentation or in the starter applications. But it helps us as developers if we also understand what’s happening behind the scenes and why. Knowing how Angular services are created and managed might not be required to write an Angular application, but the ease of writing and the quality can be, I believe, improved by better understanding those kinds of details.
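    As an illustration of the kind of behind-the-scenes detail meant here, the following is a simplified, hypothetical model — not Angular’s actual source — of how an injector creates and manages services: a service factory is registered by name, instantiated lazily the first time the service is requested, and cached so every consumer shares the same singleton:

    ```javascript
    // A toy injector sketching lazy, cached (singleton) service creation.
    function makeInjector() {
      var factories = {};  // name -> factory function
      var cache = {};      // name -> instantiated service

      return {
        factory: function (name, fn) { factories[name] = fn; },
        get: function (name) {
          if (!(name in cache)) {
            cache[name] = factories[name]();  // created only on first request
          }
          return cache[name];                 // same instance thereafter
        }
      };
    }

    var injector = makeInjector();
    injector.factory('logger', function () {
      return { lines: [], log: function (msg) { this.lines.push(msg); } };
    });

    var a = injector.get('logger');
    var b = injector.get('logger');
    // a and b refer to the same cached service instance
    ```

    Knowing that services behave this way — one shared instance, created on demand — explains, for example, why a service is a natural place to share state between controllers.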

    Visual representations

    In the course of trying to better understand Angular behind-the-scenes and write about it, I’ve come to rely heavily on visual representations of the key concepts and processes. The visual representations I’ve done aren’t perfect by any means, but just working through how to represent a process in a diagram has a great clarifying effect.

    There’s nothing new about visually representing software concepts. UML, process diagrams, even Business Process Modeling Notation (BPMN) are ways to help visualize classes, concepts, relationships and functionality.

    And while those diagramming techniques are useful, it seems that at least in the Angular world, we’re missing a full-bodied visual language that is well suited to describe, document or specify Angular applications.

    We probably don’t need to reinvent the wheel here — obviously something totally new is not needed — but when I’m tackling a (for me) new area of a complex application, having available a customized visual vocabulary to represent it would help.

    Diagrammatically representing front-end JavaScript development

    I’m working with Angular daily so I’m thinking specifically about how to represent an Angular application, but this may also be an issue within the larger JavaScript community: how to diagrammatically represent front-end JavaScript development in a way that allows us to clearly visualize our models, controllers and views, and the interactions between the DOM and our JavaScript code, including event-driven, async callbacks. In other words, a visual domain specific language (DSL) for client-side JavaScript development.

    I don’t have a complete answer for that, but in self-defense I started working with some diagrams to roughly represent parts of an Angular application. Here’s sort of the sequence I went through to arrive at a first cut:

    1. The first thing I did was write out a detailed description of the problem and what I wanted out of an Angular visual DSL. I also defined some simple abbreviations to use to identify the different types of Angular “objects” (directives, controllers, etc.). Then I dove in and began diagramming.
    2. I identified the area of code I needed to understand better, picked a file and threw it on the diagram. What I wanted to do was to diagram it in such a way that I could look at that one file and document it without simultaneously having to trace everything to which it connected.
    3. When the first item was on the diagram, I went to something on which it depended. For example, starting with a directive this leads to associated views or controllers. I diagrammed the second item and added the relationship.
    4. I kept adding items and relationships including nested directives and their views and controllers.
    5. I continued until the picture made sense and I could see the pieces involved in the task I had to complete.
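    The steps above amount to incrementally building a graph, and one way to see that is to sketch the diagram as data. The following is a rough, hypothetical model (all names invented for illustration): nodes are Angular “objects” with a kind and a directory, and edges are the relationships added one at a time:

    ```javascript
    // A diagram-as-data sketch: nodes for directives/controllers/views,
    // edges for the relationships between them.
    var diagram = { nodes: {}, edges: [] };

    function addNode(name, kind, directory) {
      diagram.nodes[name] = { kind: kind, directory: directory };
    }

    function addEdge(from, relation, to) {
      diagram.edges.push({ from: from, relation: relation, to: to });
    }

    // Step 2: start with the file under investigation.
    addNode('caseList', 'directive', 'app/cases/');
    // Step 3: follow one dependency and record the relationship.
    addNode('case-list.html', 'view', 'app/cases/templates/');
    addEdge('caseList', 'templateUrl', 'case-list.html');
    // Step 4: keep adding items, including nested directives.
    addNode('caseRow', 'directive', 'app/cases/');
    addEdge('caseList', 'nests', 'caseRow');

    // Step 5: query the picture once it makes sense.
    function dependenciesOf(name) {
      return diagram.edges
        .filter(function (e) { return e.from === name; })
        .map(function (e) { return e.to; });
    }
    ```

    A structure like this could also feed a rendering tool (Graphviz, say) to regenerate the drawing as the code changes, though the article’s diagrams were drawn by hand.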

    Since I was working on a specific ticket, I knew the problem I needed to solve, so not all information had to be included in each visual element. The result is rough and way too verbose, but it did accomplish the following:

    • Showing me the key pieces and how they related, particularly the nested directives.
    • Including useful information on where methods or $scope properties lived.
    • Giving a guide to the directories where each item lives.

    It’s not pretty but here is the result:

    This represents a somewhat complicated part of the code and having the diagram helped in at least four ways:

    • By going through the exercise of creating it, I learned the pieces involved in an orderly way — and I didn’t have to try to retain the entire structure in my head as I went.
    • I got the high-level view I needed.
    • It was very helpful when developing, particularly since the work got interrupted and I had to come back to it a few days later.
    • When the work was done, I added it to our internal wiki to ease future ramp-up in the area.

    I think some next steps might be to define and expand the visual vocabulary by adding things such as:

    • Define unique shapes or icons to identify directives, controllers, views, etc.
    • Standardize how to represent the different kinds of relationships, such as ng-include or a view referenced by a directive.
    • Standardize how to represent async actions.
    • Add representations of the model.

    As I said in the beginning, this is rough and nowhere near complete, but it did confirm for me the potential value of having a diagramming convention customized for JavaScript development. And in particular, it validated the need for a robust visual DSL to explore, explain, specify and document Angular applications.