Articles by Christopher Blizzard


  1. Web Open Font Format for Firefox 3.6

    This article was written by John Daggett. John is a Mozilla contributor and has been working hard with font creators and web developers to improve the state of fonts on the web. This article is a high-level overview of what’s different and shows some examples of WOFF in use. A full list of other supporting organizations can be found at the official Mozilla Blog.

    In Firefox 3.5 we included support for linking to TrueType and OpenType fonts. In Firefox 3.6 we’re including support for a new font format – the Web Open Font Format, or WOFF. This format has two main advantages over raw TrueType or OpenType fonts.

    1. It is compressed, which means that you will typically see much smaller download sizes compared with raw TrueType or OpenType fonts.
    2. It contains information that allows you to see where the font came from – without DRM or labeling for a specific domain – which means it has support from a large number of font creators and font foundries.

    The WOFF format originated from a collaboration between the font designers Erik van Blokland and Tal Leming, with help from Mozilla’s Jonathan Kew. Each had proposed their own format, and WOFF represents a melding of these different proposals. The format itself is intended to be a simple repackaging of OpenType or TrueType font data; it doesn’t introduce any new behavior, alter the @font-face linking mechanism, or affect the way fonts are rendered. Many font vendors have expressed support for this new format, so the hope is that it will open up a wider range of font options for web designers.

    Details on Differences between TrueType, OpenType and WOFF

    First, compression is part of the WOFF format, so web authors can optimize the size of fonts used on their pages. The compression is lossless: the uncompressed font data will match that of the original OpenType or TrueType font, so the font will render exactly as the original does. Similar compression can be achieved using general HTTP compression, but because compression is built into the WOFF format itself, it’s simpler for authors to use, especially in situations where access to server configuration is not possible.

    Second, the format includes optional metadata so that a font vendor can tag their fonts with information related to font usage. This metadata doesn’t affect how fonts are loaded, but tools can use it to identify the source of a given font, so that those interested in the design of a given page can track down the fonts used on that page. Fonts in WOFF format are compressed but not encrypted; the format should not be viewed as a “secure” format by those looking for a mechanism to strictly regulate and control font use.

    Note: until Firefox 3.6 ships, users can test the use of WOFF fonts with Firefox nightly builds.


    Below is a simple example that shows how to construct an @font-face rule that links to a WOFF font. To support browsers that only support direct linking to OpenType and TrueType fonts, the ‘src’ descriptor lists the WOFF font first along with a format hint (“woff”), followed by the TrueType version:

    /* Gentium (SIL International) */
    @font-face {
      font-family: GentiumTest;
      src: url(fonts/GenR102.woff) format("woff"),
           url(fonts/GenR102.ttf) format("truetype");
    }

    body {
      font-family: GentiumTest, Times, "Times New Roman", serif;
    }

    Structured this way, browsers that support the WOFF format will download the WOFF file. Other browsers that support @font-face but don’t yet support the WOFF format will use the TrueType version. (Note: IE support is a bit trickier, as discussed below). As WOFF is adopted more widely the need to include links to multiple font formats will diminish.

    Other examples below demonstrate the use of WOFF-formatted fonts, but each example has been constructed so that it will work in any browser that supports @font-face, including Internet Explorer.

    A font family with multiple faces

    Using a PostScript CFF font

    African Language Display

    Below is an example of how downloadable fonts can be used to render languages for which font support is usually lacking. The example shows the UN Declaration of Human Rights, translated into two African languages, and how these render with default browser fonts vs. with a downloadable font suited for rendering these languages.

    Note that in one of these examples the font goes from a 3.1MB TTF to a 1MB WOFF file, and in the other from a 172KB TTF to an 80KB WOFF file.

    Another PostScript CFF font

    An example in Japanese

    Working With Other Browsers

    Firefox 3.6 will be the first shipping browser to support the WOFF format, so it’s important to construct @font-face rules that work with browsers lacking WOFF support. One thing that helps greatly here is the use of format hints to indicate the format of font data before it’s downloaded; browsers simply skip data in formats they don’t support.

    Internet Explorer, including IE8, only supports the EOT font format and only implements a subset of the @font-face rule descriptors. This makes creating cross-platform @font-face rules that work with IE especially tricky. One solution is to make different rules for IE:

    @font-face {
      font-family: GentiumTest;
      src: url(fonts/GenR102.eot);  /* for IE */
    }

    @font-face {
      font-family: GentiumTest;
      /* Works only in WOFF-enabled browsers */
      src: url(fonts/GenR102.woff) format("woff");
    }

    One minor downside of this is that IE doesn’t understand format hints and doesn’t parse @font-face URLs correctly; it treats format hints as part of the URL. Web authors using the @font-face rules above will therefore see the following in their access logs:

    GET /fonts/GenR102.eot HTTP/1.1" 200 303536
    GET /fonts/GenR102.woff)%20format(%22woff%22) HTTP/1.1" 404 335

    IE successfully pulls down and uses the EOT version of the font, but it also tries to pull down the WOFF font with the format hint included in the URL. That request fails and doesn’t affect page rendering, but it wastes valuable server resources. For one interesting workaround, see Paul Irish’s blog post.
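    One widely cited approach (a sketch in the spirit of the workaround mentioned above; the local() family name here is illustrative) gives IE its own src line and hides the cross-browser one behind local(), which IE can’t parse:

    ```css
    @font-face {
      font-family: GentiumTest;
      src: url(fonts/GenR102.eot);   /* IE reads this line */
      /* IE can't parse local(), so it skips this entire second src;
         other browsers use it and pick the first format they support */
      src: local("Gentium"),
           url(fonts/GenR102.woff) format("woff"),
           url(fonts/GenR102.ttf) format("truetype");
    }
    ```

    The result is that IE downloads only the EOT file and makes no mangled second request, while WOFF-capable browsers use the WOFF file.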

    Another problem is that IE currently tries to download all fonts declared on a page, whether they are used or not. That makes site-wide stylesheets containing every font used across a site difficult, since IE will always try to download all fonts defined in @font-face rules, wasting lots of server bandwidth.

    Further Resources


    Latest draft WOFF specification
    Original blog post on using @font-face
    CSS3 Fonts working draft
    MDC @font-face documentation


    Jonathan Kew’s sample encoding/decoding code
    woffTools – tools for examining and validating WOFF files
    FontTools/TTX – Python library and tool for manipulating font data
    Web-based font subsetting tool

    General @font-face Examples

    CSS @ Ten: The Next Big Thing
    Example layout using Graublau Sans
    Examples of Interesting Web Typography
    The Elements of Typographic Style Applied to the Web

    Font Resources

    Font Squirrel
    10 Great Free Fonts for @font-face
    40 Excellent Free Fonts by Smashing Magazine

  2. 5 years of Firefox

    5 Years of Firefox Cake at the Firefox Developer Day in Tokyo, Japan


    Firefox is five years old. We thought that we would celebrate that by talking about how the web has changed over the last five years and Firefox’s role in those changes.

    Where We’re At

    2009 has been an interesting year. We’re at a crossroads for the Internet. In the next 12 months or so we’re likely to see regulation of the Internet in the United States – possibly for good, possibly for bad. We’ve seen increased interest in the browser space with the entrance of Google and their minimalist Chrome browser. Mozilla put a vastly improved rendering engine into the hands of hundreds of millions of users with the release of Firefox 3.5. The EU is working with Microsoft to implement a ballot to make users aware of browser choice. No one could possibly say that things are boring right now. And this has only been over the last year.

    But what has changed over the last five years? What are the main themes? We’ve picked a few to talk about and we hope that it helps put things into perspective for the next five.

    The Rise of the Modern Browser

    One thing that’s become obvious over the last five years is the wide gap that’s emerging between the field of modern browsers – Firefox, Safari, Opera and Chrome – and the world’s most popular browser – IE. The modern browser is built for the future of web applications: super-fast JavaScript, modern CSS, HTML5, support for the various web-apps standards, downloadable font support, offline application support, raw graphics through canvas and WebGL, native video, and advanced XHR capabilities mixed with new security tools and network capabilities.

    Over the last five years we’ve been setting ourselves up for the next five. The web is moving faster, not slower, and modern browsers are set to handle it.

    In this sense we’ve done our jobs at Mozilla. We were first on the scene with fast JavaScript, CORS, mixing HTML and SVG, WebGL is based on Canvas3D work we pioneered, we’re scripting hardware with geolocation and orientation. We’re helping to standardize and implement some new CSS capabilities that are being developed in other browsers, we’re leading the web towards a modern font system and giving web authors and users more security tools. Our job is to help keep the web rich and moving forward – this is a huge part of our public benefit mission. This is the opportunity that Firefox’s five years have offered us.

    The browsers that are on the horizon aren’t just incremental changes – they represent the pieces to build the next generation web – rich with standards-based graphics, new JavaScript libraries and full blown applications.

    Standards Won

    Firefox’s growth on the web has had another important effect – bringing standards to the forefront of development. Very early in the Mozilla project, almost half of the web’s HTML pages started using a DOCTYPE in order to opt in to standards mode in many web browsers. Developers signaled that they wanted to use a standards-based method for development.

    That’s important. It set up the current frame for development on the web that we have today. It allowed Apple to take KHTML and turn it into Safari which then allowed Chrome to pick up that work and enter the market and render a standards-based web. Now we don’t have just one or two browsers, but many, and a lot of that has to do with the way that early web developers approached development.

    Standards matter, and they should continue to matter. When they do those individual human beings we like to call users benefit with greater choice and fast innovation.

    Customizing Your Experience

    Led by Firefox’s add-ons system there’s been an explosion in the number of people who are customizing their experiences – both in browsers and on the web. Anywhere from one third to one half of Firefox users have some kind of add-on installed.

    The web is unique, and was built to be hacked. No other widely-deployed system in the world delivers itself as source code like the web does. And this transparency has made it possible for the distributed innovation that we’re seeing in Firefox and on the web. People patching new UI into their favorite web sites, mashing up data from multiple sources or radically changing the feel of the browser itself – this is a source for inspiration for browser vendors and web site operators alike. For the first time individual people have the ability to take an active part in the future of their computing experience.

    It’s also worth noting that Gecko and Firefox are unique in this space. The highly modular nature of Gecko mixed with the fact that Firefox itself is written in HTML and XUL (another UI-focused markup language) means that it’s the only browser that’s hackable like the web is. Every other browser is built as a monolithic desktop application from the last millennium. This natural advantage not only means that Firefox has the widest array of add-ons and developers, but is also a source of inspiration for most of the rest of the market.

    RSS and Data

    In the last five years one of the big changes we’ve seen is web sites offering up data and feeds. Feeds in particular have reached the point where even non-technical people know what they are and how to use them. The ubiquitous RSS icon, which was originally created for the Firefox browser and given away by Mozilla, now exists on millions of web sites offering users the ability to get updates on their terms.

    But we’ve gone far beyond just simple feeds. Advanced APIs are now appearing for web sites so you can integrate native applications, build a Firefox extension or be able to pull your data out of a web site.

    And we’ve also moved from the promise of XML to the reality of JSON as the data format of choice.


    Video

    It’s important to remember that YouTube didn’t exist when Firefox was launched. At that time your only options were native QuickTime, Windows Media or Real Player. (Anyone remember Real Player?)

    In the last few years we’ve seen YouTube become one of the largest sites on the Internet, the launch of Hulu, and sites like Netflix offering premium on-demand video right over the Internet to web browsers and devices alike. We’ve also seen millions of people create their own videos and publish them to the web.

    We’ve also seen the launch of open video and native video support in browsers to bring the creativity and hackability of the web to currently closed video platforms.

    Users as Creators

    The rise of the web is a story of anyone being able to create a web site. But that’s still a largely technical exercise, even with tools. What we have seen, thanks to tools like WordPress and Blogger, is the growth of weblogs, feeds and data, which makes it possible for anyone with a web browser to become a publisher or journalist.

    And it has moved well beyond just text. People with low-cost tools are making movies and posting them. Remix culture is alive and well, creating commentary and new and exciting creations – all in the hands of pretty normal people.


    Mobile

    The iPhone taught us that you could build a decent browser for mobile phones and that data was important. Phones, really just in the last five years or so, have shown us that access to data plans comparable to what you can get at home can unleash developer and user creativity.

    In the last five years at Mozilla we’ve also made the commitment to build a browser for mobile devices. We’re still in an early pre-1.0 beta stage, but that browser is already getting excellent reviews.

    So what about the next five years?

    Mozilla has been at the heart of many of the issues of the Internet over the last five years. We’ve vastly improved the browsing experience for hundreds of millions of people around the world. We’ve managed to keep Microsoft honest and forced them to release newer versions of their browsers. Firefox’s presence was a large factor in Apple being able to ship a browser to its user base as the Mac came back to the market. We’ve made it possible for third-party browser vendors like Google to enter the market. We’ve proven that people care about improving their experiences on the web. We’ve given over 330 million people a taste of what it’s like to use an open source product. And we’ve overseen the technical growth of the web through direct action and standardization.

    It’s hard to beat that, but we’re going to try. We’ll continue to make competitive browser releases and improve people’s experiences on the web. We’ll continue to innovate on behalf of developers and bring those improvements to the standards space. And we’ll continue to grow our amazing global community of users, developers and activists.

    Over the next five years everyone can expect that the browser should take part in a few new areas – to act as the user agent it should be. Issues around data, privacy and identity loom large. You will see the values of Mozilla’s public benefit mission reflected in our product choices in these areas to make users safer and help them understand what it means to share data with web sites.

    Expect to see big changes in the video space. HTML5-based video and open video codecs are starting to appear on the web as web developers make individual choices to support a standards-based, royalty-free approach. Expect to see changes in the expectations around the licensing of codecs.

    And over the next five years mobile will play an increasingly important role in our lives, and in the future of the web. The decisions of users, carriers, governments and the people who build phones will have far-reaching effects on this new extension to the Internet and how people will access information for decades to come.

    Mozilla has a unique place on the Internet. Driven by our mission to help improve it, expect us to express opinions on decisions that affect its future. We act through direct action but also through indirect action – sometimes our effects are as important as our actions. We will continue to protect users, and we’ll continue to do everything we can to make it possible for the next set of people to come along and build the next great web site.

    It’s been a great five years. Let’s make it another five and keep the web moving forward for the benefit of everyone.

  3. privacy-related changes coming to CSS :visited

    For more information about this, have a look at David Baron’s post, the bug and the post on the security blog.

    For many years the CSS :visited selector has been a vector for querying a user’s history. It’s not particularly dangerous by itself, but when it’s combined with getComputedStyle() in JavaScript it means that someone can walk through your history and figure out where you’ve been. And quickly – some tests show the ability to test 210,000 URLs per minute. At that rate, it’s possible to brute force a lot of your history or at least establish your identity through fingerprinting. Given that browsers often keep history for a long time it can reveal quite a bit about where you’ve been on the web.

    At Mozilla we’re serious about protecting people’s privacy, so we’re going to fix this problem for our users. To do so we’re making changes to how :visited works in Firefox. We’re not sure what release this will be part of yet and the fixes are still making their way through code review, but we wanted to give a heads up to people as soon as we understood how we wanted to approach fixing this.

    These changes will have some impact on web sites and developers, so you should be aware of them. At a high level here’s what’s changing:

    • getComputedStyle (and similar functions like querySelector) will lie. They will always return values as if a user has never visited a site.
    • You will still be able to visually style visited links, but you’re severely limited in what you can use. We’re limiting the CSS properties that can be used to style visited links to color, background-color, border-*-color, and outline-color, plus the color parts of the fill and stroke properties. For any other part of the style of a visited link, the style for unvisited links is used instead. In addition, even for the properties listed above, you won’t be able to set rgba() or hsla() colors or transparent on them.

    These are pretty obvious cases that are used widely. There are a couple of subtle changes to how selectors work as well:

    • If you use a sibling selector (combinator) like :visited + span then the span will be styled as if the link were unvisited.
    • If you’re using nested link elements (rare) and the element being matched is different from the link whose presence in history is being tested, then the element will be drawn as if the link were unvisited as well.

    These last two are somewhat confusing, and we’ll have examples of them up in a separate post.
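    In the meantime, here is a quick sketch of how the property restrictions play out (selectors and colors are illustrative):

    ```css
    /* These can still differ between visited and unvisited links: */
    a:visited {
      color: #551a8b;
      background-color: #f0e8ff;
      border-bottom-color: #551a8b;
    }

    /* These are ignored for visited links; the unvisited style is used instead: */
    a:visited {
      font-weight: bold;                 /* not a color property */
      background-image: url(check.png);  /* background images no longer work */
      color: rgba(85, 26, 139, 0.5);     /* rgba()/hsla()/transparent not allowed */
    }
    ```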

    The impact on web developers here should be minimal, and that’s part of our intent. But there are a couple of areas that will likely require changes to sites:

    • If you’re using background images to style links and indicate if they are visited, that will no longer work.
    • We won’t support CSS Transitions that relate to visitedness. There isn’t that much CSS Transition content on the web, so this is unlikely to affect very many people, but it’s still worth noting as another vector we won’t support.

    We’d like to hear more about how you’re using CSS :visited and what the impact will be on your site. If you see something that’s going to cause something to break, we’d like to at least get it documented. Please leave a comment here with more information so others can see it as well.

  4. an update on open video codecs and quality

    Two days ago we posted a comparison by Greg Maxwell of low- and medium-resolution YouTube videos vs. Theora counterparts at the same bit rates. The result of that test was that Theora did much better at the low bit rate and more or less the same at the slightly higher bit rate – the conclusion being that Theora is perfectly appropriate for a site like YouTube.

    Now Maik Merten has done the same with videos at HD resolution, comparing videos encoded by YouTube and a video encoded with the new Theora encoder at a managed bitrate. The results? Go have a look at the images in the post. Tell us if you can honestly see a major difference. We can’t.


  5. color correction for images in Firefox 3.5

    Back in Firefox 3, we introduced support for color profiles in tagged images, but it was disabled by default. In Firefox 3.5 we were able to make the color correction process about 5x faster than it was in Firefox 3 so we’ve enabled support for color correction for tagged images.

    Most images on the web are untagged. If you don’t know the difference between tagged and untagged images, the odds are good that you won’t notice this change. However, we suggest that you read on to learn what it might mean for you if you want to include tagged images, and how future Firefox releases might change the interactions between CSS colors and images.

    What’s a color profile?

    People who spend a lot of time taking photographs or doing any kind of high-resolution color printing will understand that output devices – LCDs, CRTs, paper, etc. – all have very different interpretations of what various colors mean. For example, uncorrected red will look very different on an LCD vs. a CRT. You can see this if you set up two very different monitors next to each other and the operating system doesn’t do color correction – colors will look more washed out on one of them vs. the other.

    JPG and PNG images both have support for color profiles. These color profiles allow Firefox to take colors in the image and translate them into colors that are independent of any particular device.

    While images can contain color profiles, it’s important to note that output devices like monitors also have color profiles. As an example, an output device may be better at displaying reds than blues. When you’re getting ready to show something that’s “red” on that device, it might need to be converted from the neutral #ff0000 value to #f40301 in order to show up as red on the screen.

    What this means is that there are actually two conversions that take place with color profiles. The first is to take the original color information in the image and, using the color profile, convert it into a device-independent color space. Then once it’s in that independent space you convert it again using the output device’s color profile to get it ready to display on the output device.
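    To make the two-step conversion concrete, here is a heavily simplified JavaScript sketch. The “profiles” below are reduced to per-channel scale factors purely for illustration; real ICC profiles involve tone curves, matrices and lookup tables:

    ```javascript
    // Step 1: image color -> device-independent ("neutral") color.
    // Step 2: neutral color -> output device color.
    // The profiles here are fake per-channel scale factors, for illustration only.
    function toDeviceColor(imageColor, imageProfile, deviceProfile) {
      var neutral = imageColor.map(function (channel, i) {
        return channel * imageProfile[i];               // into the neutral space
      });
      return neutral.map(function (channel, i) {
        return Math.round(channel * deviceProfile[i]);  // into the device space
      });
    }

    // A neutral #ff0000 red, shifted slightly for a device that renders
    // reds a touch differently (cf. the #ff0000 vs. #f40301 example above):
    var corrected = toDeviceColor([255, 0, 0], [1, 1, 1], [0.957, 1, 1]);
    // corrected is [244, 0, 0]
    ```
    
    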

    So what about CSS colors?

    To properly understand how CSS interacts with these color spaces, it’s important to understand how color profiles work and how colors are converted between them.

    In Firefox 3.5 we consider CSS colors to already be in the output device’s color space. Another way of saying this is that CSS colors are not in the neutral color space and are not converted for the output device the way tagged images are.

    What this means is that if you have a tagged image where a color is expected to match the CSS that’s right next to it, it won’t. Or at least it’s likely that it won’t on some output device – maybe not the one that you happen to be using for development. Remember that different output devices have different color profiles. Here’s an example of what that looks like:

    In Firefox 3, this will look like one contiguous block of purple. In Firefox 3.5 and Safari you will notice that there’s a purple box within a purple box (unless your system profile is sRGB). This is because the image is color corrected while the surrounding CSS is not.

    This is where the statement about the future comes in. In a future release of Firefox we are likely to make it possible for people to turn on color correction for both tagged images and CSS. You can test this setting today by changing the pref listed on the Mozilla Developer Center’s color correction page to “Full color management.” In that case untagged images should continue to work, as we will be rendering both CSS and untagged images in the sRGB color space.

    Image support and tools

    PNGs can be tagged in three different ways. First, they can have an iCCP chunk that contains the associated ICC profile. Second, they can be explicitly tagged as sRGB using an sRGB chunk. Finally, they can contain gAMA and cHRM chunks that describe the image’s gamma and chromaticities. Using any of these methods will cause Firefox to color correct the image.

    You can remove all of the color-correction chunks, resulting in an untagged image, using pngcrush:

    pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB infile.png outfile.png

    Alternatively, you can use TweakPNG and delete the gAMA, cHRM, iCCP and sRGB chunks by hand.

  6. Firefox 4 Beta 9 – a huge pile of awesome

    Hello, and welcome to the post for Firefox 4 Beta 9. If you’re reading this then you’re interested in what we’ve got coming down the pipe for the latest beta of the wonderful browser known as Firefox 4. We’re starting to reach the end of our development cycle for the next release of Firefox, and I thought it might be worth it to give a full overview of everything that we’ve got in store for web developers vs. Firefox 3.6. So if you’ve read the posts for previous betas, some of this will look familiar. But this is a new beta and we’ve been busy, so there’s something new in here for everyone.


    One of Firefox 4’s main themes is Performance. As such, we’ve done a huge amount of work to improve browser performance across the board, from startup time, to JavaScript performance, to interactive performance and making the user interface easier and more efficient to drive. That being said, here are some of the things we’ve done to improve performance up to Beta 9.

    The JaegerMonkey has landed.

    And you might have noticed that it’s really fast. This is the world’s first third-generation JavaScript engine, using Baseline JIT technology similar to engines found in other browsers, kicked up a level with the Tracing engine found in Firefox 3.6. As such, we’re competitive on benchmarks such as SunSpider and V8, but we’re also fast at the whole mess of things that we expect to see in next-generation web applications – which is why we built the Kraken benchmark.

    Hardware Acceleration

    Firefox 4 supports full hardware acceleration on Windows 7 and Windows Vista via a combination of D2D, DX9 and DX10. This allows us to accelerate everything from Canvas drawing to video rendering. Windows XP users will also enjoy hardware acceleration for many operations because we’re using our new Layers infrastructure along with DX9. And, of course, OSX users have excellent OpenGL support, so we’ve got that covered as well.


    Compartments

    Compartments, added in Beta 7, are infrastructure that we’ve added to Firefox to improve performance, give us a better base for security and, over the long term, give us better responsiveness because of their positive effects on garbage collection. (Note that some parts of Compartments still have not landed.) If you want to learn about why Compartments are so important, have a look at Andreas Gal’s great post on the topic. He does a great job of explaining what they are and how they work.

    Improved DOM performance and style resolution

    We’ve made a huge number of improvements to our overall DOM and style resolution performance as well. Although JavaScript gets much of the focus these days, it turns out that in page load and interactive performance, the DOM and style resolution (usually as part of reflowing a document) often affect page performance more than JavaScript. And as such we’ve seen improvement across the board in this area with Firefox 4. In some tests we’re a solid 2x faster than Firefox 3.6.

    JavaScript typed arrays

    If you’re doing processing with JavaScript and you want to process low-level data, Firefox now supports native typed arrays in JavaScript. This can make a gigantic difference in how fast items are processed if you’re doing WebGL or image manipulation.

    In addition, if you’re getting data from an XHR request, you can get that data as a JavaScript native typed array for fast processing with mozResponseArrayBuffer.
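    For example, here is a small sketch of typed arrays in action (the values are illustrative):

    ```javascript
    // A raw 8-byte buffer viewed as unsigned bytes.
    var buffer = new ArrayBuffer(8);     // 8 bytes of raw binary storage
    var bytes = new Uint8Array(buffer);  // typed view over the buffer
    bytes[0] = 255;                      // writes go straight into the buffer

    // Clamped arrays are handy for pixel work: out-of-range values are
    // clamped to the 0..255 range instead of wrapping around.
    var pixels = new Uint8ClampedArray([10, 200, 300, -20]);
    // pixels now holds 10, 200, 255, 0
    ```
    
    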

    JavaScript animation scheduling APIs

    We now support events that let you schedule animations from JavaScript. Doing this means you aren’t wasting CPU and battery by running your animations as fast as possible when the underlying rendering engine isn’t going to draw them anyway. (Gecko now has an internal timer for all drawing and will render at a maximum of 60fps, the edge of human perception. It will also turn this timer down to 1fps for tabs that aren’t in the foreground. Use it.) This is most useful for people building libraries, but you should be aware that there’s a better way to animate in Gecko.

    Retained layers

    As mentioned above, Firefox now has an internal Layers system for improved drawing performance for many types of web pages. Images, fixed backgrounds, inline video, etc. are now rendered to an internal layer and then composited with other parts of a web page in a way that can take advantage of hardware acceleration. This improves the page load time and interactive performance of many web pages.

    Asynchronous plug-in painting

    On Windows and Linux we now paint plug-ins asynchronously. In previous releases, when we wanted to paint a web page that included a plug-in, we would have to ask the plug-in for the data to paint. If a plug-in was slow or hung, it made the browser feel slow. Now we’re able to get that data asynchronously, which means that the overall responsiveness of the browser should be improved.

    Vastly improved cache

    Firefox 4 has a vastly improved disk cache. This disk cache should result in faster start-up performance and runtime performance. But why mention this here? Our expectation is that site owners should see much higher cache hit rates with Firefox 4 as we dynamically set the cache size based on how much space is free on an end-user’s hard drive.


    WebGL

    Firefox 4 now has WebGL enabled by default. Based on the original 3-D Canvas work by Vladimir Vukićević, this is being widely implemented in browsers. The WebGL spec is on the short path to a 1.0 release and we’re very excited to see it used in the wild.

    Want a demo? Try the amazing Flight of the Navigator where we combine WebGL with a number of other technologies in Firefox 4 — HTML5 video, the Audio API, all remixed with live data from the web.


    JavaScript

    The new JaegerMonkey engine in Firefox 4 isn’t just faster – it also has some new tricks. Our goal is to bring new tools to developers, and that includes evolving the JavaScript language. Here are two examples of where we’ve made improvements:

    ECMAScript 5

    Firefox 4 Beta 9 contains much of the new version of the JavaScript language. Our goal with Firefox 4 is to have industry-leading support for ECMAScript 5, including the new strict mode. You can track our progress on the ES5 bug.
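    A few of the ES5 additions can be sketched in one snippet (a hedged illustration; exact coverage varies between betas, so check the ES5 bug for current status):

```javascript
"use strict";  // ES5 strict mode: many silent errors become thrown errors

// Object.defineProperty gives fine-grained control over properties.
var point = {};
Object.defineProperty(point, "x", { value: 3, writable: false });

// The new Array extras: map, filter, reduce, every, some, indexOf...
var doubled = [1, 2, 3].map(function (n) { return n * 2; });

// Function.prototype.bind fixes `this` for callbacks.
function getX() { return this.x; }
var boundGetX = getX.bind(point);

// Native JSON support is also part of ES5.
var json = JSON.stringify({ doubled: doubled, x: boundGetX() });
```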

    Web Console

    Firefox 4 will include the Web Console. This is a new tool that will let you inspect a web page as it’s running, see network activity, see messages logged with console.log, see warnings for a page’s CSS, and a number of other things.

    Note that this is something we’ll be including with Firefox 4 directly. It’s not an add-on.

    (Also Firebug will be ready for the final Firefox 4 release.)


    Firefox has always had excellent support for HTML5, even as far back as Firefox 3.5. Firefox 4 builds on that history with a bunch of new HTML5-related features.


    We’ve started to implement much of HTML5 forms. We include support for new input types, datalist support, new input attributes like autofocus and placeholder, decoupled forms, form options, validation mechanisms, constraint validation and new CSS selectors to bind them all together. There’s a great post up on the Hacks site covering our HTML5 forms support in Firefox 4. If you want more information, check it out.
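    In markup, a sketch of some of these additions (the attribute and element names are from the HTML5 forms spec; this is illustrative, not exhaustive):

```html
<form>
  <!-- autofocus and placeholder are new content attributes -->
  <input type="text" name="user" autofocus placeholder="Your name" required>
  <!-- new input types degrade to type="text" in older browsers -->
  <input type="email" name="mail">
  <!-- a datalist supplies suggestions to an associated input -->
  <input type="url" name="site" list="known-sites">
  <datalist id="known-sites">
    <option value="http://mozilla.org">
    <option value="http://hacks.mozilla.org">
  </datalist>
</form>
```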


    Firefox 4 includes an HTML5-ready parser. This parser brings with it some new capabilities, most notably inline SVG, but also improves performance by running the parsing algorithm on its own thread. It also brings our parser algorithm closer to the standard and lays the foundation for consistent parsing across browsers.

    Support for WebM

    Firefox 4 includes support for WebM, the new royalty-free format for video on the web. It works on YouTube if you join their HTML5 beta. It also works with embedded YouTube videos if you use their new iframe embedding API. (Join the beta and see an example at the bottom of this post.)

    Video Buffer API

    We now support the buffered attribute for HTML5 video. This means it’s possible to give users an accurate measure of how much video has been buffered instead of having to guess from the download rate and your current position in the stream.

    Video “preload” support

    We’ve replaced the “autobuffer” attribute for HTML5 video with the new “preload” attribute. This attribute gives you much better control over how videos are pre-buffered if they are included on a page rather than the old on/off system that was included with Firefox 3.5.

    History pushState and replaceState

    Firefox 4 now supports the HTML5 pushState and replaceState history modification calls. These allow you to create or modify the browser navigation history. They are extremely useful for applications that might want to generate history entries without using the hash after the URL (useful for HTML-based slides, for example). Other sites have been using these APIs to hide history information so that site-internal navigation aids aren’t revealed as part of HTTP referrers.

    Audio sampling and generation API

    Firefox 4 includes the Firefox Audio Data API. This API allows you to read, modify and write data from audio and video elements. It is now being widely experimented with and people are building some very beautiful things with it.


    Firefox 4 includes a bunch of new DOM functionality, in addition to the performance improvements I mentioned before.

    Added support for .click() to the File upload control

    You can now call .click() on a hidden File control in order to bring up the platform file upload widget. This means you don’t have to expose the (ugly) file upload control; instead you can build your own. If you combine this with the new File APIs and progress events, you’ve got a recipe for a very nice file upload experience.

    Support for .slice in the File API

    We now support the Blob API and the .slice APIs that come with it. This is useful for people who want to process small parts of an otherwise large File object from JavaScript. You no longer have to load the entire file into memory.

    It’s also useful for people who want to reliably upload large files. With some server and JS code you can now split a large file into small sections and upload those chunks, including retrying failed sections, or even uploading several sections in parallel.
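    The chunking arithmetic behind such an uploader can be sketched in plain JavaScript (chunkRanges and the sizes here are invented for illustration; in the browser each range would become one slice() call on the Blob):

```javascript
// Compute [start, end) byte ranges for uploading a file in chunks.
// Each range would correspond to one blob.slice(start, end) request.
function chunkRanges(fileSize, chunkSize) {
  var ranges = [];
  for (var start = 0; start < fileSize; start += chunkSize) {
    ranges.push({ start: start, end: Math.min(start + chunkSize, fileSize) });
  }
  return ranges;
}

// A 10 MB file in 4 MB chunks yields three ranges, the last one partial.
var ranges = chunkRanges(10 * 1024 * 1024, 4 * 1024 * 1024);
```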

    Support for File API URLs

    We now support the .url attribute for the File API. This lets you take objects that you’ve gotten through the File API and use them for images, video, HTML or other URL-powered objects.

    Touch and multi-touch events

    Firefox 4 supports both touch and multi-touch events, exposed to the DOM. Support for this functionality is available on Windows 7 systems.

    Detect click-to-touch

    You can now tell if a mouse or finger generated an event with the mozInputSource property. This is useful with the touch and multi-touch events and makes it possible to build apps that treat touch and mouse input differently.


    IndexedDB

    Firefox 4 will include a super-early version of IndexedDB. This emerging standard for local storage is still undergoing change and will be private-prefixed until it’s stable. We have some early API docs on IndexedDB, but the spec and implementation we have in Firefox 4 have both changed since the docs were last updated. As we get closer to release time we’ll be updating the docs and talk more about the implementation. Meanwhile, you can read the IndexedDB primer for an overview of using the IndexedDB API.


    FormData

    Firefox 4 includes support for the new FormData object, which makes it easier to interact with HTML forms. It also enables some new capabilities, like making it easy to upload a file as part of a form accessed via the File API.

    SVG Animation and SMIL

    In Firefox 4 you can now animate SVG with SMIL.

    SVG as Images and CSS backgrounds

    You can now use SVG as the source of an <img> tag. You can even animate them with SMIL. And you can also use SVGs as backgrounds in CSS.

    Get a Canvas as a File object

    Many people wanted a Canvas accessible as a file object for uploads and other purposes. You can now use the mozGetAsFile() method on a canvas and it will return an image file.

    Resizable text areas

    If you’ve been using a beta, you’ve noticed that text areas are now resizable by default. You can disable that with the new resize CSS property.


    Firefox 4 includes a huge pile of new CSS properties and promotions from the private CSS namespace into the final namespace due to the maturation of some of the standards in this space.

    CSS Transitions

    Firefox 4 will include support for CSS Transitions. Since the spec is still early, these are still -moz prefixed extensions.


    calc

    We now support an early version of calc private-namespaced as -moz-calc. This will let you use simple expressions anywhere you would normally want to use a length. This can make the CSS layout of pages much simpler. (No more divs for spacing!)
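    As a hedged sketch of the prefixed syntax (the selector and lengths here are invented for illustration):

```css
/* A content column that fills whatever space is left over by a
   fixed 250px sidebar and a 2em gutter – no spacer divs needed. */
#content {
  width: -moz-calc(100% - 250px - 2em);
}
```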


    -moz-any()

    We now support a new extremely useful CSS extension: -moz-any() selector grouping. This will be very useful in HTML5 contexts. Please read the post for more information.


    -moz-element()

    -moz-element() is an extension to the background-image property that lets you use any element as the background for another element. This is hugely useful and powerful, and we hope that other browsers will adopt it as well.


    :-moz-placeholder

    The :-moz-placeholder pseudo-class lets you style the placeholder text shown in an HTML5 form field – for example, changing its color. This can be very useful if you’ve changed the style of the actual text box and you want the placeholder text to match.


    border-radius

    The border-radius property is now supported, replacing the old -moz-border-radius property.


    box-shadow

    box-shadow has replaced -moz-box-shadow.


    -moz-font-feature-settings

    Firefox 4 will include support for exposing much more of the capabilities of OpenType fonts with the -moz-font-feature-settings property. It’s possible to take advantage of all kinds of font capabilities – kerning, ligatures, alternates, real small caps, and stylistic sets, to name just a few.

    Consistent CSS units

    We’ve changed our handling of pixel sizes to match Internet Explorer, Safari and Chrome so that 1 inch always equals 96 pixels. See Robert’s post for more information on these changes.

    Support for a physical CSS unit

    We’ve introduced a new CSS unit, mozmm for the rare instance where you actually want a physical size to be used. Once again, see Robert’s post for more information on this new unit.


    -moz-device-pixel-ratio

    Firefox now supports the -moz-device-pixel-ratio media query that gives you the number of device pixels per CSS pixel.


    resize

    As mentioned above, we now have a resize CSS property that lets you disable resizable input text areas.


    -moz-tab-size

    We now support the -moz-tab-size property that lets you specify the width, in space characters, used when rendering a tab character (U+0009) in text.


    :-moz-focusring

    The new CSS pseudo-class, :-moz-focusring, lets you specify the appearance of an element when it’s focused and would have a focus ring drawn around it. Different operating systems have different conventions for when to draw a focus ring, and this lets you control the look of form controls while keeping to platform conventions.


    -moz-image-rect

    The new -moz-image-rect() function lets you use a sub-rectangle of an image as a background-image.


    Last but not least, Firefox 4 supports a huge number of new security tools, useful for both users and web developers alike. Here’s a quick overview of the new technologies we’re delivering:

    Content Security Policy

    Content Security Policy (CSP) is a set of tools that web developers can use to help prevent a couple of different classes of attacks. In particular, it offers tools to mitigate cross-site scripting, click-jacking and packet sniffing attacks.

    Another important piece of CSP is that when one of the rules is violated, Firefox can send back information about that violation to the web site, making it a useful canary to improve security for other browsers too.
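    The directive syntax was still experimental at the time, so treat this as a rough sketch of what a prefixed policy header might look like (the report-uri path is invented):

```http
X-Content-Security-Policy: allow 'self'; img-src 'self' data:; report-uri /csp-report
```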


    X-Frame-Options

    Firefox 4 now supports the X-Frame-Options header, one defense against clickjacking. This response header is supported by other browsers as well. (This was also delivered as part of a Firefox 3.6 update, but is worth calling out since it’s new since the original 3.6 release.)
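    For example, a response header like the following tells the browser that the page may only be framed by pages from the same origin (DENY forbids framing entirely):

```http
X-Frame-Options: SAMEORIGIN
```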

    HSTS (ForceTLS)

    Firefox 4 supports the obtusely-named HTTP Strict Transport Security (HSTS) headers. You can use these headers to tell the browser that it should never, ever contact a site over unencrypted HTTP.
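    A sketch of the header: max-age is in seconds (one year here), and the optional includeSubDomains token extends the rule to all subdomains.

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```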

    Firefox users can also use the STS UI add-on to add and remove sites from the HSTS list, even sites that don’t natively support HSTS.

    CORS Improvements

    We’ve fixed some bugs in our CORS implementation.

    :visited changes

    Firefox 4 includes the changes required to help improve your privacy online by closing a decade-old hole in CSS rules that let any website query your browsing history. These changes have also been picked up by WebKit-based browsers and we’ve heard rumors that IE 9 might pick up this important change as well.

    That’s a lot of stuff

    Yes, it really is! We hope that you like it.

  7. Firefox, YouTube and WebM

    Five important items of note today relating to Mozilla’s support for the VP8 codec:

    1. Google will be releasing VP8 on an open-source and royalty-free basis. VP8 is a high-quality video codec that Google acquired when it purchased the company On2. The VP8 codec represents a vast improvement in quality-per-bit over Theora and is comparable in quality to H.264.

    2. The VP8 codec will be combined with the Vorbis audio codec and a subset of the Matroska container format to build a new standard for Open Video on the web called WebM. You can find out more about the project at its new site:

    3. We will include support for WebM in Firefox. You can get super-early WebM builds of Firefox 4 pre-alpha today. WebM will also be included in Google Chrome and Opera.

    4. Every video on YouTube will be transcoded into WebM. They have about 1.2 million videos available today and will be working through their back catalog over time. But they have committed to supporting everything.

    5. This is something that is supported by many partners, not just Google. Content providers like Brightcove have signed up to support WebM as part of a full HTML5 video solution. Hardware companies, encoding providers and other parts of the video stack are all part of the list of companies backing WebM. Even Adobe will be supporting WebM in Flash. Firefox, with its market share and principled leadership, and YouTube, with its video reach, are the most important partners in this solution, but we are only a small part of the larger ecosystem of video.

    We’re extremely excited to see Google joining us to support Open Video. They are making technology available on terms consistent with the Open Web and the W3C Royalty-Free licensing terms. And – most importantly – they are committing to support a full open video stack on the world’s largest video site. This changes the landscape for video and moves the baseline for what other sites have to do to maintain parity and keep up with upcoming advances in video technology, not to mention compatibility with the set of browsers that are growing their userbase and advancing technology on the web.

    At Mozilla, we’ve wanted video on the web to move as fast as the rest of the web. That has required a baseline of open technology to build on. Theora was a good start, but VP8 is better. Expect us to start pushing on video innovation with vigor. We’ll innovate like the web has, moving from the edges in, with dozens of small revolutions that add up to something larger than the sum of those parts. VP8 is one of those pieces, HTML5 is another. If you watch this weblog, you can start to see those other pieces starting to emerge as well. The web is creeping into more and more technologies, with Firefox leading the way. We intend to keep leading the web beyond HTML5 to the next place it needs to be.

    Today is a day of great change. Tomorrow will be another.

  8. Firefox 3.5 for 35 days – dreaming about the future of the web

    35 days logo

    Over the next 35 days we’ll be talking about all of the new developer features in Firefox 3.5.

    The upcoming release of Firefox 3.5 is a big upgrade for users.  It includes new privacy features, improvements in interactive performance and a new JavaScript engine that will improve the experience for users using script-heavy web sites.  These are all features that users will appreciate and talk about.

    But what Firefox 3.5 really represents is a huge upgrade to the web itself.  We’ve included support for audio, video, threads in JavaScript, new canvas features, tons of new CSS features, downloadable fonts and geolocation.  While Firefox 3 was a significant upgrade for the web’s users, Firefox 3.5 does the same for developers.

    Because of the sheer volume of new features for developers in Firefox 3.5 we’ll be spending the next 35 days featuring all of them, with two posts a day, every day.  Each day’s first post will describe one new feature in Firefox in depth – how to use it, why it’s there and some examples of what it can do for you.  The second post will be all fun – examples of demos that people have put together using all of the features that are in Firefox 3.5.

    Like much of what we do at Mozilla this is a community-led project. We’ve reached out to members of the Mozilla and web developer community for posts and demos.  Some posts will be written by core Mozilla hackers, but many of them will be written by people who are just excited about the chance to use this technology in the wild.

    My personal belief is that these posts represent Mozilla’s dreams for the future of the web.  We are not a company like every other browser vendor – our core mission is to improve and protect the web.  Firefox 3.5 represents what we think the next stage of the web will be – interactive applications, video, web sites collaborating and sharing information – all via the browser.  Our hope is that each of you will take what we post here, encourage other browser vendors to implement similar features where they haven’t already, and do your part to keep the web vibrant.

  9. an overview of TraceMonkey

    This post was written by David Mandelin who works on Mozilla’s JavaScript team.

    Firefox 3.5 has a new JavaScript engine, TraceMonkey, that runs many JavaScript programs 3-4x faster than Firefox 3, speeding up existing web apps and enabling new ones. This article gives a peek under the hood at the major parts of TraceMonkey and how they speed up JS. This will also explain what kinds of programs get the best speedup from TraceMonkey and what kinds of things you can do to get your program to run faster.

    Why it’s hard to run JS fast: dynamic typing

    High-level dynamic languages such as JavaScript and Python make programming more productive, but they have always been slow compared to statically typed languages like Java or C. A typical rule of thumb was that a JS program might be 10x slower than an equivalent Java program.

    There are two main reasons JS and other dynamic scripting languages usually run slower than Java or C. The first reason is that in dynamic languages it is generally not possible to determine the types of values ahead of time. Because of this, the language must store all values in a generic format and process values using generic operations.

    In Java, by contrast, the programmer declares types for variables and methods, so the compiler can determine the types of values ahead of time. The compiler can then generate code that uses specialized formats and operations that run much faster than generic operations. I will call these type-specialized operations.

    The second main reason that dynamic languages run slower is that scripting languages are usually implemented with interpreters, while statically typed languages are compiled to native code. Interpreters are easier to create, but they incur extra runtime overhead for tracking their internal state. Languages like Java compile to machine language, which requires almost no state tracking overhead.

    Let’s make this concrete with a picture. Here are the slowdowns in picture form for a simple numeric add operation: a + b, where a and b are integers. For now, ignore the rightmost bar and focus on the comparison of the Firefox 3 JavaScript interpreter vs. a Java JIT. Each column shows the steps that have to be done to complete the add operation in each programming language. Time goes downward, and the height of each box is proportional to the time it takes to finish the steps in the box.

    time diagram of add operation

    In the middle, Java simply runs one machine language add instruction, which runs in time T (one processor cycle). Because the Java compiler knows that the operands are standard machine integers, it can use a standard integer add machine language instruction. That’s it.

    On the left, SpiderMonkey (the JS interpreter in FF3) takes about 40 times as long. The brown boxes are interpreter overhead: the interpreter must read the add operation and jump to the interpreter’s code for a generic add. The orange boxes represent extra work that has to be done because the interpreter doesn’t know the operand types. The interpreter has to unpack the generic representations of a and b, figure out their types, select the specific addition operation, convert the values to the right types, and at the end, convert the result back to a generic format.

    The diagram shows that using an interpreter instead of a compiler is slowing things down a little bit, but not having type information is slowing things down a lot. If we want JS to run more than a little faster than in FF3, by Amdahl’s law, we need to do something about types.

    Getting types by tracing

    Our goal in TraceMonkey is to compile type-specialized code. To do that, TraceMonkey needs to know the types of variables. But JavaScript doesn’t have type declarations, and we also said that it’s practically impossible for a JS engine to figure out the types ahead of time. So if we want to just compile everything ahead of time, we’re stuck.

    So let’s turn the problem around. If we let the program run for a bit in an interpreter, the engine can directly observe the types of values. Then, the engine can use those types to compile fast type-specialized code. Finally, the engine can start running the type-specialized code, and it will run much faster.

    There are a few key details about this idea. First, when the program runs, even if there are many if statements and other branches, the program always goes only one way. So the engine doesn’t get to observe types for a whole method — the engine observes types through the paths, which we call traces, that the program actually takes. Thus, while standard compilers compile methods, TraceMonkey compiles traces. One side benefit of trace-at-a-time compilation is that function calls that happen on a trace are inlined, making traced function calls very fast.

    Second, compiling type-specialized code takes time. If a piece of code is going to run only one or a few times, which is common with web code, it can easily take more time to compile and run the code than it would take to simply run the code in an interpreter. So it only pays to compile hot code (code that is executed many times). In TraceMonkey, we arrange this by tracing only loops. TraceMonkey initially runs everything in the interpreter, and starts recording traces through a loop once it gets hot (runs more than a few times).

    Tracing only hot loops has an important consequence: code that runs only a few times won’t speed up in TraceMonkey. Note that this usually doesn’t matter in practice, because code that runs only a few times usually runs too fast to be noticeable. Another consequence is that paths through a loop that are not taken at all never need to be compiled, saving compile time.

    Finally, above we said that TraceMonkey figures out the types of values by observing execution, but as we all know, past performance does not guarantee future results: the types might be different the next time the code is run, or the 500th next time. And if we try to run code that was compiled for numbers when the values are actually strings, very bad things will happen. So TraceMonkey must insert type checks into the compiled code. If a check doesn’t pass, TraceMonkey must leave the current trace and compile a new trace for the new types. This means that code with many branches or type changes tends to run a little slower in TraceMonkey, because it takes time to compile the extra traces and jump from one to another.

    TraceMonkey in action

    Now, we’ll show tracing in action by example on this sample program, which adds the first N whole numbers to a starting value:

     function addTo(a, n) {
       for (var i = 0; i < n; ++i)
         a = a + i;
       return a;
     }

     var t0 = new Date();
     var n = addTo(0, 10000000);
     print(new Date() - t0);

    TraceMonkey always starts running the program in the interpreter. Every time the program starts a loop iteration, TraceMonkey briefly enters monitoring mode to increment a counter for that loop. In FF3.5, when the counter reaches 2, the loop is considered hot and it’s time to trace.

    Now TraceMonkey continues running in the interpreter but starts recording a trace as the code runs. The trace is simply the code that runs up to the end of the loop, along with the types used. The types are determined by looking at the actual values. In our example, the loop executes this sequence of JavaScript statements, which becomes our trace:

        a = a + i;    // a is an integer number (0 before, 1 after)
        ++i;          // i is an integer number (1 before, 2 after)
        if (!(i < n)) // n is an integer number (10000000)

    That’s what the trace looks like in a JavaScript-like notation. But TraceMonkey needs more information in order to compile the trace. The real trace looks more like this:

        ++i;            // i is an integer number (0 before, 1 after)
        temp = a + i;   // a is an integer number (1 before, 2 after)
        if (lastOperationOverflowed())
            exit_trace(OVERFLOWED);
        a = temp;
        if (!(i < n))   // n is an integer number (10000000)
            exit_trace(BRANCHED);
        goto trace_1_start;

    This trace represents a loop, and it should be compiled as a loop, so we express that directly using a goto. Also, integer addition can overflow, which requires special handling (for example, redoing with floating-point addition), which in turn requires exiting the trace. So the trace must include an overflow check. Finally, the trace exits in the same way if the loop condition is false. The exit codes tell TraceMonkey why the trace was exited, so that TraceMonkey can decide what to do next (such as whether to redo the add or exit the loop). Note that traces are recorded in a special internal format that is never exposed to the programmer — the notation used above is just for expository purposes.

    After recording, the trace is ready to be compiled to type-specialized machine code. This compilation is performed by a tiny JIT compiler (named, appropriately enough, nanojit) and the results are stored in memory, ready to be executed by the CPU.

    The next time the interpreter passes the loop header, TraceMonkey will start executing the compiled trace. The program now runs very fast.

    On iteration 65537, the integer addition will overflow. (2147450880 + 65537 = 2147516417, which is greater than 2^31-1 = 2147483647, the largest signed 32-bit integer.) At this point, the trace exits with an OVERFLOWED code. Seeing this, TraceMonkey will return to interpreter mode and redo the addition. Because the interpreter does everything generically, the addition overflow is handled and everything works as normal. TraceMonkey will also start monitoring this exit point, and if the overflow exit point ever becomes hot, a new trace will be started from that point.
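    The arithmetic behind that iteration count is easy to check directly (a sketch; nothing here is TraceMonkey-specific, and firstOverflowingAdd is an invented name):

```javascript
// When the add for iteration i begins, the running sum in addTo(0, n)
// is 0 + 1 + ... + (i - 1) = i * (i - 1) / 2. Find the first add whose
// result no longer fits in a signed 32-bit integer (max 2^31 - 1).
var INT32_MAX = 2147483647;

function firstOverflowingAdd() {
  var a = 0;
  for (var i = 0; ; ++i) {
    if (a + i > INT32_MAX) return { i: i, a: a };  // this add would overflow
    a = a + i;
  }
}

var overflow = firstOverflowingAdd();
// overflow.a is 2147450880 and overflow.i is 65536 – i.e. the 65537th
// iteration, counting from i = 0, matching the trace exit described above.
```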

    But in this particular program, what happens instead is that the program passes the loop header again. TraceMonkey knows it has a trace for this point, but TraceMonkey also knows it can’t use that trace because that trace was for integer values, but a is now in a floating-point format. So TraceMonkey records a new trace:

        ++i;            // i is an integer number
        a = a + i;      // a is a floating-point number
        if (!(i < n))   // n is an integer number (10000000)
            exit_trace(BRANCHED);
        goto trace_2_start;

    TraceMonkey then compiles the new trace, and on the next loop iteration, starts executing it. In this way, TraceMonkey keeps the JavaScript running as machine code, even when types change. Eventually the trace will exit with a BRANCHED code. At this point, TraceMonkey will return to the interpreter, which takes over and finishes running the program.

    Here are the run times for this program on my laptop (2.2GHz MacBook Pro):

    System                  Run Time (ms)
    SpiderMonkey (FF3)                990
    TraceMonkey (FF3.5)                45
    Java (using int)                   25
    Java (using double)                74
    C (using int)                       5
    C (using double)                   15

    This program gets a huge 22x speedup from tracing and runs about as fast as Java! Functions that do simple arithmetic inside loops usually get big speedups from tracing. Many of the bit operation and math SunSpider benchmarks, such as bitops-3bit-bits-in-byte, crypto-sha1, and math-spectral-norm, get 6x-22x speedups.

    Functions that use more complex operations, such as object manipulation, get a smaller speedup, usually 2-5x. This follows mathematically from Amdahl’s law and the fact that complex operations take longer. Looking back at the time diagram, consider a more complex operation that takes time 30T for the green part. The orange and brown parts will still be about 30T together, so eliminating them gives a 2x speedup. The SunSpider benchmark string-fasta is an example of this kind of program: most of the time is taken by string operations that have a relatively long time for the green box.

    Here is a version of our example program you can try in the browser.

    Understanding and fixing performance problems

    Our goal is to make TraceMonkey reliably fast enough that you can write your code in the way that best expresses your ideas, without worrying about performance. If TraceMonkey isn’t speeding up your program, we hope you’ll report it as a bug so we can improve the engine. That said, of course, you may need your program to run faster in today’s FF3.5. In this section, we’ll explain some tools and techniques for fixing performance of a program that doesn’t get a good (2x or more) speedup with the tracing JIT enabled. (You can disable the jit by going to about:config and setting the pref javascript.options.jit.content to false.)

    The first step is understanding the cause of the problem. The most common cause is a trace abort, which simply means that TraceMonkey was unable to finish recording a trace, and gave up. The usual result is that the loop containing the abort will run in the interpreter, so you won’t get a speedup on that loop. Sometimes, one path through the loop is traced, but there is an abort on another path, which will cause TraceMonkey to switch back and forth between interpreting and running native code. This can leave you with a reduced speedup, no speedup, or even a slowdown: switching modes takes time, so rapid switching can lead to poor performance.

    With a debug build of the browser or a JS shell (which I build myself – we don’t publish these builds) you can tell TraceMonkey to print information about aborts by setting the TMFLAGS environment variable, for example TMFLAGS=minimal,abort.


    The minimal option prints out all the points where recording starts and where recording successfully finishes. This gives a basic idea of what the tracer is trying to do. The abort option prints out all the points where recording was aborted due to an unsupported construct. (Setting TMFLAGS=help will print the list of other TMFLAGS options and then exit.)

    (Note also that TMFLAGS is the new way to print the debug information. If you are using a debug build of the FF3.5 release, the environment variable setting is TRACEMONKEY=abort.)

    Here’s an example program that doesn’t get much of a speedup in TraceMonkey.

    function runExample2() {
      var t0 = new Date;
      var sum = 0;
      for (var i = 0; i < 100000; ++i) {
        sum += i;
      }
      var prod = 1;
      for (var i = 1; i < 100000; ++i) {
        eval("prod *= i");
      }
      var dt = new Date - t0;
      document.getElementById('example2_time').innerHTML = dt + ' ms';
    }


    If we set TMFLAGS=minimal,abort, we’ll get this:

    Recording starting from ab.js:5@23
    recording completed at  ab.js:5@23 via closeLoop
    Recording starting from ab.js:5@23
    recording completed at  ab.js:5@23 via closeLoop
    Recording starting from ab.js:10@63
    Abort recording of tree ab.js:10@63 at ab.js:11@70: eval.
    Recording starting from ab.js:10@63
    Abort recording of tree ab.js:10@63 at ab.js:11@70: eval.
    Recording starting from ab.js:10@63
    Abort recording of tree ab.js:10@63 at ab.js:11@70: eval.

    The first two pairs of lines show that the first loop, starting at line 5, traced fine. The following lines show that TraceMonkey started tracing the loop on line 10, but failed each time because of an eval.

    An important note about this debug output is that you will typically see some messages referring to inner trees growing, stabilizing, and so on. These really aren’t problems: they usually just indicate a delay in finishing tracing a loop because of the way TraceMonkey links inner and outer loops. And in fact, if you look further down the output after such aborts, you will usually see that the loops eventually do trace.

    Otherwise, aborts are mainly caused by JavaScript constructs that are not yet supported by tracing. The trace recording process is easier to implement for a basic operation like + than it is for an advanced feature like arguments. We didn’t have time to do robust, secure tracing of every last JavaScript feature in time for the FF3.5 release, so some of the more advanced ones, like arguments, aren’t traced in FF3.5.0. Other advanced features that are not traced include getters and setters, with, and eval. There is partial support for closures, depending on exactly how they are used. Refactoring to avoid these constructs can help performance.
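For example, a function that uses arguments can often be refactored to take an explicit array instead (function names here are illustrative, not from the article):

```javascript
// Not traced in FF3.5: touching `arguments` aborts the trace.
function sumAll() {
  var s = 0;
  for (var i = 0; i < arguments.length; ++i) {
    s += arguments[i];
  }
  return s;
}

// Traceable alternative: pass the values as an explicit array.
function sumArray(values) {
  var s = 0;
  for (var i = 0; i < values.length; ++i) {
    s += values[i];
  }
  return s;
}
```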

    Two particularly important JavaScript features that are not traced are:

    • Recursion. TraceMonkey doesn’t see repetition that occurs through recursion as a loop, so it doesn’t try to trace it. Refactoring to use explicit for or while loops will generally give better performance.
    • Getting or setting a DOM property. (DOM method calls are fine.) Avoiding DOM property access entirely is generally impossible, but refactoring the code to move it out of hot loops and performance-critical segments should help.
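As an illustration of the first point, a recursive computation can usually be rewritten as an explicit loop (the example is mine, not from the original article):

```javascript
// Recursion: TraceMonkey (as of FF3.5) doesn't recognize this
// repetition as a loop, so it runs in the interpreter.
function sumRec(n) {
  return n <= 0 ? 0 : n + sumRec(n - 1);
}

// Equivalent explicit loop: the tracer can record and compile this.
function sumLoop(n) {
  var total = 0;
  for (var i = 1; i <= n; ++i) {
    total += i;
  }
  return total;
}
```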

    We are actively working on tracing all the features named above. For example, support for tracing arguments is already available in nightly builds.

    Here is the slow example program refactored to avoid eval. Of course, I could have simply done the multiplication inline. Instead, I used a function created by eval because that’s a more general way of refactoring an eval. Note that the eval still can’t be traced, but it only runs once so it doesn’t matter.

    function runExample3() {
      var t0 = new Date;
      var sum = 0;
      for (var i = 0; i < 100000; ++i) {
        sum += i;
      }
      var prod = 1;
      var mul = eval("(function(i) { return prod * i; })");
      for (var i = 1; i < 100000; ++i) {
        prod = mul(i);
      }
      var dt = new Date - t0;
      document.getElementById('example3_time').innerHTML = dt + ' ms';
    }


    There are a few more esoteric situations that can also hurt tracing performance. One of them is trace explosion, which happens when a loop has many paths through it. Consider a loop with 10 if statements in a row: the loop has 1024 paths, potentially causing 1024 traces to be recorded. That would use up too much memory, so TraceMonkey caps each loop at 32 traces. If the loop has fewer than 32 hot traces, it will perform well. But if each path occurs with equal frequency, then only 3% of the paths are traced, and performance will suffer.
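A sketch of the kind of loop that can trigger trace explosion (purely illustrative):

```javascript
// Each independent `if` in the loop body doubles the number of distinct
// control-flow paths; ten such tests give 2^10 = 1024 possible paths,
// well past TraceMonkey's cap of 32 traces per loop.
function manyPaths(data) {
  var sum = 0;
  for (var i = 0; i < data.length; ++i) {
    var x = data[i];
    if (x & 1) sum += 1;
    if (x & 2) sum += 2;
    if (x & 4) sum += 4;
    // ...imagine seven more such tests here.
  }
  return sum;
}
```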

    This kind of problem is best analyzed with TraceVis, which creates visualizations of TraceMonkey performance. Currently, the build system only supports enabling TraceVis for shell builds, but the basic system can also run in the browser, and there is ongoing work to enable TraceVis in a convenient form in the browser.

    The blog post on TraceVis is currently the most detailed explanation of what the diagrams mean and how to use them to diagnose performance problems. The post also contains a detailed analysis of a diagram that is helpful in understanding how TraceMonkey works in general.

    Comparative JITerature

    Here I will give a few comparisons to other JavaScript JIT designs. I’ll focus more on hypothetical designs than competing engines, because I don’t know details about them — I’ve read the release information and skimmed a few bits of code. Another big caveat is that real-world performance depends at least as much on engineering details as it does on engine architecture.

    One design option could be called a per-method non-specializing JIT. By this, I mean a JIT compiler that compiles a method at a time and generates generic code, just like what the interpreter does. Thus, the brown boxes from our diagrams are cut out. This kind of JIT doesn’t need to take time to record and compile traces, but it also does not type-specialize, so the orange boxes remain. Such an engine can still be made pretty fast by carefully designing and optimizing the orange box code. But the orange box can’t be completely eliminated in this design, so the maximum performance on numeric programs won’t be as good as a type-specializing engine.

    As far as I can tell, as of this writing Nitro and V8 are both lightweight non-specializing JITs. (I’m told that V8 does try to guess a few types by looking at the source code (such as guessing that a is an integer in a >> 2) in order to do a bit of type specialization.) It seems that TraceMonkey is generally faster on numeric benchmarks, as predicted above. But TraceMonkey suffers a bit on benchmarks that use more objects, because our object operations and memory management haven’t been optimized as heavily.

    A further development of the basic JIT is the per-method type-specializing JIT. This kind of a JIT tries to type-specialize a method based on the argument types the method is called with. Like TraceMonkey, this requires some runtime observation: the basic design checks the argument types each time a method is called, and if those types have not been seen before, compiles a new version of the method. Also like TraceMonkey, this design can heavily specialize code and remove both the brown and orange boxes.
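A toy model of that dispatch strategy, written at the JavaScript level (no real engine works like this; it only illustrates the "one compiled version per argument-type signature" idea):

```javascript
// Toy per-method type-specializing dispatch: cache one "compiled"
// version of a function per argument-type signature.
function makeSpecializingCall(genericFn) {
  var cache = {};  // type signature -> specialized function
  return function () {
    var args = Array.prototype.slice.call(arguments);
    var sig = args.map(function (a) { return typeof a; }).join(",");
    if (!cache[sig]) {
      // Stand-in for compilation: a real JIT would emit native code
      // specialized to exactly these argument types here.
      cache[sig] = function () {
        return genericFn.apply(null, arguments);
      };
    }
    return cache[sig].apply(null, args);
  };
}
```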

    I’m not aware that anyone has deployed a per-method type-specializing JIT for JavaScript, but I wouldn’t be surprised if people are working on it.

    The main disadvantage of a per-method type-specializing JIT compared to a tracing JIT is that the basic per-method JIT only directly observes the input types to a method. It must try to infer types for variables inside the method algorithmically, which is difficult for JavaScript, especially if the method reads object properties. Thus, I would expect that a per-method type-specializing JIT would have to use generic operations for some parts of the method. The main advantage of the per-method design is that the method needs to be compiled exactly once per set of input types, so it’s not vulnerable to trace explosion. In turn, I think a per-method JIT would tend to be faster on methods that have many paths, and a tracing JIT would tend to be faster on highly type-specializable methods, especially if the method also reads a lot of values from properties.


    By now, hopefully you have a good idea of what makes JavaScript engines fast, how TraceMonkey works, and how to analyze and fix some performance issues that may occur running JavaScript under TraceMonkey. Please report bugs if you run into any significant performance problems. Bug reports are also a good place for us to give additional tuning advice. Finally, we’re trying to improve constantly, so check out nightly TraceMonkey builds if you’re into the bleeding edge.

  10. Firefox 4: the HTML5 parser – inline SVG, speed and more

    This is a guest post from Henri Sivonen, who has been working on Firefox’s new HTML5 parser. The HTML parser is one of the most complicated and sensitive pieces of a browser. It controls how your HTML source is turned into web pages and as such changes to it are rare and need to be well-tested. While most of Gecko has been rebuilt since its initial inception in the late 90s, the parser was one of the stand-outs as being “original.” This replaces that code with a new parser that’s faster, compliant with the new HTML5 standard and enables a lot of new functionality as well.

    A project to replace Gecko’s old HTML parser, dating from 1998, has been ongoing for some time now. The parser was just turned on by default on the trunk, so you can now try it out by simply downloading a nightly build without having to flip any configuration switch.

    There are four main things that improve with the new HTML5 parser:

    • You can now use SVG and MathML inline in HTML5 pages, without XML namespaces.
    • Parsing is now done off Firefox’s main UI thread, improving overall browser responsiveness.
    • The speed of innerHTML calls has improved by about 20%.
    • With the landing of the new parser we’ve fixed dozens of long-standing parser related bugs.

    Try the demo with a Firefox Nightly or another HTML5-ready browser.

    What Is It?

    The HTML5 parser in Gecko turns a stream of bytes into a DOM tree according to the HTML5 parsing algorithm.

    HTML5 is the first specification that tells implementors, in detail, how to parse HTML. Before HTML5, HTML specifications didn’t say how to turn a stream of bytes into a DOM tree. In theory, HTML before HTML5 was supposed to be defined in terms of SGML. This implied a certain relationship between the source of valid HTML documents and the DOM. However, parsing wasn’t well-defined for invalid documents (and Web content most often isn’t valid HTML4) and there are SGML constructs that were in theory part of HTML but that in reality popular browsers didn’t implement.

    The lack of a proper specification led to browser developers filling in the blanks on their own and reverse engineering the browser with the largest market share (first Mosaic, then Netscape, then IE) when in doubt about how to get compatible behavior. This led to a lot of unwritten common rules but also to different behavior across browsers.

    The HTML5 parsing algorithm standardizes well-defined behavior that browsers and other applications that consume HTML can converge on. By design, the HTML5 parsing algorithm is suitable for processing existing HTML content, so applications don’t need to continue maintaining their legacy parsers for legacy content. Concretely, in the trunk nightlies, the HTML5 parser is used for all text/html content.

    How Is It Different?

    The HTML5 parsing algorithm has two major parts: tokenization and tree building. Tokenization is the process of splitting the source stream into tags, text, comments and attributes inside tags. The tree building phase takes the tags and the interleaving text and comments and builds the DOM tree. The tokenization part of the HTML5 parsing algorithm is closer to what Internet Explorer does than what Gecko used to do. Internet Explorer has had the majority market share for a while, so sites have generally been tested not to break when subjected to IE’s tokenizer. The tree building part is close to what WebKit does already. Of the major browser engines WebKit had the most reasonable tree building solution prior to HTML5.

    Furthermore, the new HTML5 parser parses network streams off the main thread. Traditionally, browsers have performed most tasks on the main thread. Radical changes like off-the-main-thread parsing are made possible by the more maintainable code base of the HTML5 parser compared to Gecko’s old HTML parser.

    What’s In It for Web Developers?

    The changes mentioned above are mainly of interest to browser developers. A key feature of the HTML5 parser is that you don’t notice that anything has changed.

    However, there is one big new Web developer-facing feature, too: inline MathML and SVG. HTML5 parsing liberates MathML and SVG from XML and makes them available in the main file format of the Web.

    This means that you can include typographically sophisticated math in your HTML document without having to recast the entire document as XHTML or, more importantly, without having to retrofit the software that powers your site to output well-formed XHTML. For example, you can now include the solution for quadratic equations inline in HTML:


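One possible MathML encoding of the quadratic formula (this markup is illustrative, not necessarily what the original demo used):

```html
<math>
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>−</mo><mi>b</mi><mo>±</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>−</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```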
    Likewise, you can include scalable inline art as SVG without having to recast your HTML as XHTML. As screen sizes and pixel densities become more varied, making graphics look crisp at all zoom levels becomes more important. Although it has previously been possible to use SVG graphics in HTML documents by reference (using the object element), putting SVG inline is more convenient in some cases. For example, an icon such as a warning sign can now be included inline instead of including it from an external file.

    <svg height=86 width=90 viewBox='5 9 90 86' style='float: right;'>
      <path stroke=#F53F0C stroke-width=10 fill=#F5C60C stroke-linejoin=round d='M 10,90 L 90,90 L 50,14 Z'/>
      <line stroke=black stroke-width=10 stroke-linecap=round x1=50 x2=50 y1=45 y2=75 />
    </svg>

    Make yourself a page that starts with <!DOCTYPE html> and put these two pieces of code in it and it should work with a new nightly.
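A minimal test page assembled from the SVG snippet above might look like this (file contents are illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset=utf-8>
    <title>Inline SVG in text/html</title>
  </head>
  <body>
    <svg height=86 width=90 viewBox='5 9 90 86' style='float: right;'>
      <path stroke=#F53F0C stroke-width=10 fill=#F5C60C stroke-linejoin=round
            d='M 10,90 L 90,90 L 50,14 Z'/>
      <line stroke=black stroke-width=10 stroke-linecap=round
            x1=50 x2=50 y1=45 y2=75 />
    </svg>
    <p>The warning sign on the right is inline SVG.</p>
  </body>
</html>
```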

    In general, if you have a piece of MathML or SVG as XML, you can just copy and paste the XML markup inline into HTML (omitting the XML declaration and the doctype if any). There are two caveats: The markup must not use namespace prefixes for elements (i.e. no svg:svg or math:math) and the namespace prefix for the XLink namespace has to be xlink.

    In the MathML and SVG snippets above, you’ll see that the inline markup is more HTML-like and less crufty than XML merely pasted inline. There are no namespace declarations and unnecessary quotes around attribute values have been omitted. The quote omission works, because the tags are tokenized by the HTML5 tokenizer—not by an XML tokenizer. The namespace declaration omission works, because the HTML5 tree builder doesn’t use attributes looking like namespace declarations to assign MathMLness or SVGness to elements. Instead, <svg> establishes a scope of elements that get assigned to the SVG namespace in the DOM and <math> establishes a scope of elements that get assigned to the MathML namespace in the DOM. You’ll also notice that the MathML example uses named character references that previously haven’t been supported in HTML.

    Here’s a quick summary of inline MathML and SVG parsing for Web authors:

    • <svg></svg> is assigned to the SVG namespace in the DOM.
    • <math></math> is assigned to the MathML namespace in the DOM.
    • foreignObject and annotation-xml (and various less important elements) establish a nested HTML scope, so you can nest SVG, MathML and HTML as you’d expect to be able to nest them.
    • The parser case-corrects markup so <SVG VIEWBOX='0 0 10 10'> works in HTML source.
    • The DOM methods and CSS selectors behave case-sensitively, so you need to write your DOM calls and CSS selectors using the canonical case, which is camelCase for various parts of SVG such as viewBox.
    • The syntax <foo/> opens and immediately closes the foo element if it is a MathML or SVG element (i.e. not an HTML element).
    • Attributes are tokenized the same way they are tokenized in HTML, so you can omit quotes in the same situations where you can omit quotes in HTML (i.e. when the attribute value is not the empty string and does not contain whitespace, ", ', `, <, =, or >).
    • Warning: the two above features do not combine well due to the reuse of legacy-compatible HTML tokenization. If you omit quotes on the last attribute value, you must have a space before the closing slash. <circle fill=green /> is OK but <circle fill=red/> is not.
    • Attributes starting with xmlns have absolutely no effect on what namespace elements or attributes end up in, so you don’t need to use attributes starting with xmlns.
    • Attributes in the XLink namespace must use the prefix xlink (e.g. xlink:href).
    • Element names must not have prefixes or colons in them.
    • The content of SVG script elements is tokenized like they are tokenized in XML—not like the content of HTML script elements is tokenized.
    • When an SVG or MathML element is open <![CDATA[]]> sections work the way they do in XML. You can use this to hide text content from older browsers that don’t support SVG or MathML in text/html.
    • The MathML named characters are available for use in named character references everywhere in the document (also in HTML content).
    • To deal with legacy pages where authors have pasted partial SVG fragments into HTML (who knows why) or used a <math> tag for non-MathML purposes, attempts to nest various common HTML elements as children of SVG elements (without foreignObject) will immediately break out of SVG or MathML context. This may make some typos have surprising effects.