HTML5 Articles

Sort by:


  1. Flash-Free Clipboard for the Web

    As part of our effort to grow the Web platform and make it accessible to new devices, we are trying to reduce the Web’s dependence on Flash. As part of that effort, we are standardizing useful features which are currently only available in Flash and exposing them to the entirety of the Web platform.

    One of the reasons why many sites still use Flash is because of its copy and cut clipboard APIs. Flash exposes an API for programmatically copying text to the user’s clipboard on a button press. This has been used to implement handy features, such as GitHub’s “clone URL” button. It’s also useful for things such as editor UIs, which want to expose a button for copying to the clipboard, rather than requiring users to use keyboard shortcuts or the context menu.

    Unfortunately, Web APIs haven’t provided the functionality to copy text to the clipboard through JavaScript, which is why visiting GitHub with Flash disabled shows an ugly grey box where the button is supposed to be. Fortunately, we have a solution. The editor APIs provide document.execCommand as an entry point for executing editor commands. The "copy" and "cut" commands have previously been disabled for web pages, but with Firefox 41 (currently in Beta, and slated to move to release in mid-September) they become available to JavaScript within user-initiated callbacks.

    Using execCommand("cut"/"copy")

    The execCommand("cut"/"copy") API is only available during a user-triggered callback, such as a click. If you try to call it at any other time, execCommand will return false, meaning that the command failed to execute. Running execCommand("copy") will copy the current selection to the clipboard, so let’s go about implementing a basic copy-to-clipboard button.

    // button which we are attaching the event to
    var button = ...;
    // input containing the text we want to copy
    var input = ...;
    button.addEventListener("click", function(event) {
      event.preventDefault();
      // Select the input node's contents
      input.select();
      // Copy it to the clipboard
      document.execCommand("copy");
    });
    That code will trigger a copy of the text in the input to the clipboard upon the click of the button in Firefox 41 and above. However, you probably want to also handle failure situations, perhaps to fall back to another Flash-based approach such as ZeroClipboard, or simply to tell the user that their browser doesn’t support the functionality.

    The execCommand method will return false if the action failed, for example, due to being called outside of a user-initiated callback, but on older versions of Firefox, we would also throw a security exception if you attempted to use the "cut" or "copy" APIs. Thus, if you want to be sure that you capture all failures, make sure to surround the call in a try-catch block, and also interpret an exception as a failure.

    // button which we are attaching the event to
    var button = ...;
    // input containing the text we want to copy
    var input = ...;
    button.addEventListener("click", function(event) {
      event.preventDefault();
      // Select the input node's contents
      input.select();
      var succeeded;
      try {
        // Copy it to the clipboard
        succeeded = document.execCommand("copy");
      } catch (e) {
        succeeded = false;
      }
      if (succeeded) {
        // The copy was successful!
      } else {
        // The copy failed :(
      }
    });

    The "cut" API is also exposed to web pages through the same mechanism, so just s/copy/cut/, and you’re all set to go!

    Feature testing

    The editor APIs provide a method document.queryCommandSupported("copy") intended to allow API consumers to determine whether a command is supported by the browser. Unfortunately, in versions of Firefox prior to 41, we returned true from document.queryCommandSupported("copy") even though the web page was unable to actually perform the copy operation. However, attempting to execute document.execCommand("copy") would throw a SecurityException. So attempting to copy on load and checking for this exception is probably the easiest way to feature-detect support for document.execCommand("copy") in Firefox.

    var supported = document.queryCommandSupported("copy");
    if (supported) {
      // Check that the browser isn't Firefox pre-41
      try {
        document.execCommand("copy");
      } catch (e) {
        supported = false;
      }
    }
    if (!supported) {
      // Fall back to an alternate approach like ZeroClipboard
    }

    Support in other browsers

    Google Chrome and Internet Explorer both also support this API. Chrome uses the same restriction as Firefox (that it must be run in a user-initiated callback). Internet Explorer allows it to be called at any time, except it first prompts the user with a dialog, asking for permission to access the clipboard.

    For more information about the API and browser support, see MDN documentation for document.execCommand().

  2. Developer Edition 41: View source in a tab, screenshot elements, HAR files, and more

    When we introduced the new Performance tools a few weeks ago, we also talked about how the Firefox Dev Tools team had spent a lot of time focusing on user feedback and what we call ‘polish’ bugs – things reported via our UserVoice feedback channel and Bugzilla. Even though Firefox 41 was a short release cycle for us, this focus on user feedback continues to pay off — several new features that our community had been asking for landed in time for the release. Here’s a closer look:

    Screenshot the selected node in the Inspector

    New contributor Léon McGregor implemented an interesting suggestion that was posted in UserVoice. This functionality has been available via the gcli ‘screenshot’ command for quite some time, but is much more discoverable and useful as a context menu item. When the screenshot is created, Firefox copies it to your configured downloads directory.

    Create a screenshot of the currently selected element.

    View source in tab

    Starting with Firefox 41, when you right-click and select View Page Source, the HTML source view will open in a tab instead of a new window. This was a hugely popular request, and we would have shipped it earlier, but what started out as a seemingly simple change was actually quite involved: see the bug linked below for all the gory details. Importantly, we have also ensured that View Page Source provides you with the source of the page as-is from Firefox’s cache – we do not fetch a new version.

    View Page Source always opens in a tab now.

    Add Rules button

    It’s very convenient to be able to add a new rule to the Inspector as you work, and this is a feature from Firebug that users have requested for some time. During this last cycle, we spent some time polishing our implementation, and provided the convenience of a UI button in addition to the context menu command.

    We’ve added a button to the Inspector so you can quickly add a new CSS rule.

    “Copy as HAR” and “Save all as HAR”

    Another feature from Firebug that is particularly popular with Selenium users is the ability to export HAR archives for the current page.

    You can now export HAR archives directly from the network monitor.

    Other notable changes

    In total, 140 Developer Tools bugs have been fixed in Firefox since June 1st. On behalf of the team, I’d like to thank all of the people who reported bugs, tested patches, and spent many hours working to improve this version of Firefox Developer Edition, and especially these contributors who fixed bugs: edoardo.putti, fayolle-florent, 15electronicmotor, veeti.paananen, sr71pav, sjakthol, ntim, MattN, lemcgregor3, and indiasuny000. Thanks!

    • Bug 1164210 – $$() should return a true Array
    • Bug 1077339 – Display keyboard shortcuts when hovering panel tabs
    • Bug 1163183 – Show HTML5 Forms pseudo elements in the rule view
    • Bug 1165576 – Netmonitor theme refresh
    • Bug 1049888 – Make the storage actor work in e10s and Firefox OS
    • Bug 987365 – Add pseudo-class lock options to rule view
    • Bug 1059882 – Frame selection command button should be visible by default
    • Bug 1143224 – Opening the netmonitor slows down requests on the page
    • Bug 1119133 – Keyboard shortcut to toggle devtools docking mode between last two positions
    • Bug 1024693 – Copy CSS declarations
    • Bug 1050691 – Click on a function on the console should go to the debugger

    Download Firefox Developer Edition 41 now. Let us know what you think and what you’d like to see in future releases. We’re paying attention.

  3. Build an HTML5 game—and distribute it

    Last year, Mozilla and Humble Bundle brought great indie titles like FTL: Faster Than Light, Voxatron, and others to the Web through the Humble Mozilla Bundle promotion.  This year we plan to go even bigger with developments in JavaScript such as support for SIMD and SharedArrayBuffer.  Gaming on the Web without plugins is great; the user doesn’t have to install anything they don’t want, and if they love the game, they can share a link on their social media platform du jour.  Imagine the kind of viral multiplayer networking possibilities!

    Lately, I’ve been focusing on real-time rendering with WebGL and while it is quite powerful, I wanted to take a step back and look at the development of game logic and explore various distribution channels.  I’ve read a few books on game development, but I most recently finished Build an HTML5 Game by Karl Bunyan and thought I’d share my thoughts on it.  Later, we’ll take a look at some alternative ways other than links to share and distribute HTML5 games.

    Book review: Build an HTML5 Game

    Build an HTML5 Game (BHG) is meant for developers who have programmed before; have written HTML, CSS, and JavaScript; know how to host their code from a local server; and are looking to make a 2D casual game. The layout presents a good logical progression of ideas, starting small and building on work done in previous chapters. The author makes the point that advanced 3D visuals (as in WebGL) are not discussed in this book. The design of familiar gameplay genres is also avoided. Instead, the focus is on learning how to use HTML5 and CSS3 to replace Flash for the purpose of casual game development. The game created throughout the chapters is a bubble shooter (like the Bust A Move franchise). Source code, a demo of the final game itself, and solutions to further practice examples can be found online.


    I would recommend this book to any junior programmer who has written some code before, but maybe has trouble breaking up code into logical modules with a clean separation of concerns.  For example, early on in my programming career, I suffered from “monolithic file” syndrome.  It was not clear to me when it made sense to split up programs across multiple files and even when to use multiple classes as is typical in object-oriented paradigms.  I would also recommend this book to anyone who has yet to implement their own game.

    The author does a great job breaking up the workings of an actual playable game into the Model-View-Controller (MVC) pattern.  Also, it’s full of source code, with clearly recognizable diffs of what was added or removed from previous examples.  If you’re like me and like to follow along writing the code from technical books while reading them, the author of this book has done a fantastic job making it easy to do.

    The book is a great reference. It exposes a developer new to web technologies to the numerous APIs, does a good job explaining when such technologies are useful, and accurately weighs pros and cons of different approaches.  There’s a small amount of trigonometry and collision detection covered: two important ideas that are used frequently in game development.

    Another key concept that’s important to web development in general is graceful degradation.  The author shows how Modernizr is used for detecting feature support, and you even implement multiple renderers: a canvas one for more modern browsers, with a fallback renderer that animates elements in the DOM.  The idea of multiple renderers is an important concept in game development; it helps with the separation of concerns (separating rendering logic from updating the game state in particular); exposes you to more than one way of doing things; and helps you target multiple platforms (or in this case, older browsers). Bunyan’s example in this particular case is well designed.


    Many web APIs are covered either in the game itself or mentioned for the reader to pursue.  Some of the APIs, patterns, and libraries include: Modernizr, jQuery, CSS3 transitions/animations/transforms, Canvas 2D, audio tags, Sprite Atlases, DOM manipulation, localStorage, requestAnimationFrame, AJAX, WebSockets, Web Workers, WebGL, requestFullScreen, Touch Events, meta viewport tags, developer tools, security, obfuscation, “don’t trust the client” game model, and (the unfortunate world of) vendor prefixes.

    The one part of the book I thought could be improved was the author’s extensive use of absolute CSS positioning.  This practice makes the resulting game very difficult to port to mobile screen resolutions. Lots of the layout code and the collision detection algorithm assume exact widths in pixels as opposed to using percentages or newer layout modes and measuring effective widths at run time.

    Options for distributing your game

    Now let’s say we’ve created a game, following the content of Build an HTML5 Game, and we want to distribute it to users as an app.  Personally, I experience some form of cognitive dissonance here; native apps are frequently distributed through content silos, but if that’s the storefront where money is to be made, then developers are absolutely right to use app stores as a primary distribution channel.

    Also, I get frequent questions from developers who have previously developed for Android or iOS looking to target Firefox OS. They ask, “Where does my binary go?” — which is a bit of a head-scratcher to someone who’s familiar with standing up their own web server or using a hosting provider.  For instance, one of the best-known online storefronts for games, Steam, does not even mention HTML5 game submissions!

    A choice of runtimes

    I’d like to take a look at two possible ways of “packaging” up HTML5 games (or even applications) for distribution: Mozilla’s Web Runtime and Electron.

    Mozilla’s Web Runtime allows a developer with no knowledge of platform/OS specific APIs to develop an application 100% in HTML5.  There’s no need to learn anything platform-specific about how windows are created, how events are handled, or how rendering occurs.  There’s no IDE you’re forced to use, and no build step.  Unlike Cordova, you’re not writing into a framework; it’s just the Web.  The only addition you need is an App Manifest, which is currently going through the standards process at the W3C.

    An example manifest from my IRC app:

      "name": "Firesea IRC",
      "version": "1.0.13",
      "developer": {
        "name": "Mozilla Partner Engineering",
        "url": ""
      "description": "An IRC client",
      "launch_path": "/index.html",
      "permissions": {
        "tcp-socket": {
          "description": "tcp"
        "desktop-notification": {
          "description": "privMSGs and mentions"
      "icons": {
        "128": "/images/128.png"
      "type": "privileged"

    Applications developed with Mozilla’s Web Runtime can be distributed as links to the manifest to be installed with a snippet of JavaScript (“hosted apps”), or as links to assets archived in a zip file (“packaged apps”).  Mozilla will even host the applications for you, though you are free to host your apps yourself.  Google is also implementing the W3C manifest spec, though there are currently a few subtleties between implementations, such as having a launcher rather than a desktop icon.

    Here’s the snippet of JavaScript used to install a hosted app:

    var request = window.navigator.mozApps.install(manifestUrl);
    request.onsuccess = function () {
      // The app was installed successfully; this.result is the App object
    };
    request.onerror = function () {
      // The install failed; this.error.name explains why
    };

    A newer io.js (Node.js fork)-based project is Electron, formerly known as Atom Shell, and used to build projects like the Atom code editor from GitHub and Visual Studio Code from Microsoft.  Electron allows for more flexibility in application development; the application is split into two processes that can post messages back and forth.  One is the browser or content process, which uses the Blink rendering engine (from Chromium/Chrome), and the other is the main process, which is io.js.  All of your favorite Node.js modules can thus be used with Electron.  Electron is based off of NW.js (formerly node-webkit; yo dawg, heard you like forks) with a few subtleties of its own.
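
    As a rough sketch of that two-process split (using the shared 'ipc' module names from Electron's pre-1.0 API; treat the exact names and signatures as assumptions):

    // main process (io.js side): listen for messages from a renderer
    var ipc = require('ipc');
    ipc.on('ping', function (event) {
      // reply to the renderer that sent the message
      event.sender.send('pong', process.version);
    });

    // renderer (Blink) side: post a message and handle the reply
    var ipc = require('ipc');
    ipc.send('ping');
    ipc.on('pong', function (version) {
      console.log('main process is running io.js ' + version);
    });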


    Once installed, Mozilla’s Web Runtime will link against code from an installed version of Firefox and load the corresponding assets.  There’s a potential tradeoff here.  Electron currently ships an entire rendering engine for each and every app; all of Blink.  This is potentially ~40MB, even if your actual assets are significantly smaller.  Web Runtime apps will link against Firefox if it’s installed; otherwise the user is prompted to install Firefox to get the appropriate runtime.  This cuts down significantly on the size of the content to be distributed, at the cost of expecting the runtime to already be installed, which may or may not be the case.  Web Runtime apps can only be installed in Firefox or Chromium/Blink, which isn’t ideal, but it’s the best we can do until browser vendors agree on and implement the standard.  It would be nice to allow the user to pick which browser/rendering engine/environment to run the app in as well.

    While I’m a big fan of the Node.js ecosystem, I’m also a big fan of the strong guarantees of security provided by the browser.  For instance, I personally don’t trust most applications distributed as executables.  Call me paranoid, but I’d really prefer if applications didn’t have access to my filesystem, and only had permission to make network requests to the host I navigated to.  By communicating with Node.js, you bypass the strong guarantees provided by browser vendors.  For example, browser vendors have created the Content Security Policy (CSP) as a means of shutting down a few Cross Site Scripting (XSS) attack vectors.  If an app is built with Electron and accesses your file system, hopefully the developer has done a good job sanitizing their inputs!

    On the other side of the coin, we can do some really neat stuff with Electron.  For example, some of the newer Browser APIs developed in Gecko and available in Firefox and Firefox OS are not yet implemented in other rendering engines.  Using Electron and its message-passing interface, it’s actually possible to polyfill these APIs and use them directly, though security is still an issue.  Thus it’s possible to more nimbly implement APIs that other browser vendors haven’t agreed upon yet.  Being able to gracefully fallback to the host API (rather than the polyfill) in the event of an update is important; let’s talk about updates next.

    Managing updates

    Updating software is a critical part of security.  While browsers can offer stronger security guarantees, they’re not infallible.  “Zero days” exist for all major browsers, and if you had the resources you could find or even buy knowledge of one.  When it comes to updating applications, I think Mozilla’s Web Runtime has a stronger story: app assets are fetched every time, but defer to the usual asset caching strategy while the rendering engine is linked in.  Because Firefox defaults to auto updating, most users should have an up-to-date rendering engine (though there are instances where this might not be the case).  The runtime should check for updates for packaged apps daily, and updates work well.  For Electron, I’m not sure that the update policy for apps is built in.  The high value of exploits for widely installed software like rendering engines worries me a bit here.

    Apps for Mozilla’s Runtime currently work anywhere Firefox for desktop or mobile does: Windows, OS X, Linux, Android, or Firefox OS.  Electron supports the desktop platforms: Windows, OS X, and Linux.  Sadly, neither option currently supports iOS devices.  I do like that Electron allows you to generate actual standalone apps, and it looks like tools to generate the expected .msi or .dmg files are in the works.

    Microsoft’s manifold.js might be able to bridge the gap to all these different platforms and more.  Though I ran into a few road bumps while trying it out, I would be willing to give it another look.  For me, one potentially problematic issue is requiring developers to generate builds for specific platforms.  Google’s Native Client (NaCl) had this issue where developers would not build their applications for ABIs (application binary interfaces) they did not possess hardware to test for, and thus did not generate builds of their apps for them.  If we want web apps to truly run everywhere, having separate build steps for each platform is not going to cut it; and this to me is where Mozilla’s Web Runtime really shines. Go see for yourself.

    In conclusion

    If I missed anything or made any mistakes in regards to any of the technologies in this article, please let me know via comments to this post.  I’m more than happy to correct any errata. More than anything, I do not want to spread fear, uncertainty, or doubt (FUD) about any of these technologies.

    I’m super excited for the potential each of these approaches holds, and I enjoy exploring some of the subtleties I’ve observed between them.  In the end, I’m rooting for the Web, and I’m overjoyed to see lots of competition and ideation in this space, each approach with its own list of pros and cons.  What pros and cons do you see, and how do you think we can improve?  Share your (constructive) thoughts and opinions in the comments below.

  4. Power Surge – optimize the JavaScript in this HTML5 game using Firefox Developer Edition

    Power Surge!

    The Firefox Developer Tools team wanted to find a fun way to show off the great performance tools we’ve just added to the Firefox Developer Edition browser. We partnered with Przemysław Sikorski (aka rezoner), author of Playground.js and the arcade puzzle game QbQbQb, to create “Power Surge,” a fun game which shows off how the new Performance tools can help developers find slow JavaScript code. The special twist is that the more you optimize the code in the game, the more ships and powers you win in gameplay!

    First off, check out this quick screencast from Mozilla engineer Harald Kirschner on how Power Surge works:

    To get going with Power Surge:

    We’d really like to hear back from you about Power Surge! Tweet to us at @FirefoxDevtools and @mozhacks to share your solutions.

  5. Drag Elements, Console History, and more – Firefox Developer Edition 39

    Quite a few big new features, improvements, and bug fixes made their way into Firefox Developer Edition 39. Update your Firefox Developer Edition or Nightly builds to try them out!


    The Inspector now allows you to move elements around via drag and drop. Click and hold on an element and then drag it to where you want it to go. This feature was added by contributor Mahdi Dibaiee.

    Back in Firefox 33, a tooltip was added to the rule view to allow editing curves for cubic bezier CSS animations. In Developer Edition 39, we’ve greatly enhanced the tooltip’s UX by adding various standard curves you can try right away, as well as cleaning up the overall appearance. This enhancement was added by new contributor John Giannakos.


    The CSS animations panel we debuted in Developer Edition 37 now includes a time machine. You can rewind, fast forward, and set the current time of your animations.


    Previously, when the DevTools console closed, your past Console history was lost. Now, Console history is persisted across sessions. The recent commands you’ve entered will remain accessible in the next toolbox you open, whether it’s in another tab or after restarting Firefox. Additionally, we’ve added a clearHistory console command to reset the stored list of commands.

    The shorthand $_ has been added as an alias for the last result evaluated in the Console. If you evaluated an expression without storing the result in a variable, for example, you can use this as a quick way to grab the last result.
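
    For instance (a sketch of a console session):

    // evaluate an expression in the console...
    [1, 2, 3].map(function (n) { return n * n; });  // → Array [ 1, 4, 9 ]
    // ...then grab that last result via $_
    $_.reduce(function (sum, n) { return sum + n; }, 0);  // → 14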

    We now format pseudo-array-like objects as if they were arrays in the Console output. This makes a pseudo-array-like object easier to reason about and inspect, just like a real array. This feature was added by contributor Johan K. Jensen.
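
    For example, an object shaped like an array now gets array-style output (a sketch):

    // a plain object with indexed keys and a length property
    var pseudo = { 0: "a", 1: "b", 2: "c", length: 3 };
    console.log(pseudo); // logged with array-style formatting, like [ "a", "b", "c" ]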


    WebIDE and Mobile

    WiFi debugging for Firefox OS has landed. WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable. We’ll discuss this feature in more detail in a future post.

    WebIDE gained support for Cordova-based projects. If you’re working on a mobile app project using Cordova, WebIDE now knows how to build the project for devices it supports without any extra configuration.

    Other Changes

    • Attribute changes only flash the changed attribute in the Markup View, instead of the whole element.
    • Canvas Debugger now supports setTimeout for animations.
    • Inline box model highlighting.
    • Browser Toolbox can now be opened from a shortcut: Cmd-Opt-Shift-I / Ctrl-Alt-Shift-I.
    • Network Monitor now shows the remote server’s IP address and port.
    • When an element is highlighted in the Inspector, you can now use the arrow keys to highlight the current element’s parent (left key), or its first child, or its next sibling if it has no children, or the next node in the tree if it has no siblings (right key). This is especially useful when an element and its parent occupy the same space on the screen, making it difficult to select one of them using only the mouse.

    For an even more complete list, check out all 200 bugs resolved during the Firefox 39 development cycle.

    Thanks to all the new developers who made their first DevTools contribution this release:

    • Anush
    • Brandon Max
    • Geoffroy Planquart
    • Johan K. Jensen
    • John Giannakos
    • Mahdi Dibaiee
    • Nounours Heureux
    • Wickie Lee
    • Willian Gustavo Veiga

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice, or get in touch with the team at @FirefoxDevTools on Twitter.

  6. Ruby support in Firefox Developer Edition 38

    It was a long-time request from East Asian users, especially Japanese users, to have ruby support in the browser.

    Formerly, because of the lack of native ruby support in Firefox, users had to install add-ons like HTML Ruby to make ruby work. However, in Firefox Developer Edition 38, CSS Ruby has been enabled by default, which also brings support for the HTML5 ruby tags.


    What is ruby? In short, ruby is extra text, usually small, attached to the main text to indicate the pronunciation or meaning of the corresponding characters. This kind of annotation is widely used in Japanese publications. It is also common in Chinese for children’s books, educational publications, and dictionaries.

    Ruby Annotation

    Basic Usage

    Basically, the ruby support consists of four main tags: <ruby>, <rb>, <rt>, and <rp>. <ruby> is the tag that wraps the whole ruby structure, <rb> is used to mark the text in the normal line, <rt> is for the annotation, and <rp> is a tag which is hidden by default. With the four tags, a result like the one pictured above can be achieved with markup along the following lines (a sketch that uses the series title from the advanced example later in this article):
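
    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン<rp>)</rp>
    </ruby>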


    Since <rb> and <rt> close themselves automatically, we don’t bother to add code to close those tags manually.

    As shown in the image, the duplicate parts as well as <rp>s are hidden automatically. But why should we add content which is hidden by default?

    The answer is: it is more natural in semantics, and it helps conversion to the inline form, which is a more general form accepted by more software. For example, this allows the page to have a decent appearance in a browser with no ruby support. It also enables the user agent to generate well-formed plain text when you want to copy text with ruby (though this feature hasn’t yet landed in Firefox).

    In addition, the extra content makes it possible to display the annotation inline without changing the document. You would only need to add a rule to your stylesheet:

    ruby, rb, rt, rp {
      display: inline;
      font-size: inherit;
    }

    Actually, if you don’t have those requirements, only <ruby> and <rt> are necessary. For simplest cases, e.g. a single ideographic character, you can use code like:
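
    <ruby>猫<rt>ねこ</ruby>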


    Advanced Support

    Aside from the basic usage of ruby, Firefox now provides support for more advanced cases.

    By default, if the width of an annotation does not match its base text, the shorter text will be justified as shown in the example above. However, this behavior can be controlled via the ruby-align property. Aside from the default value (space-around), it can also make the content align to both sides (space-between), centered (center), or aligned to the start side (start).

    Multiple levels of annotation are also supported via the <rtc> tag, which is the container of <rt>s. Each <rtc> represents one level of annotation, but if you leave out the <rtc>, the browser will do some cleanup to wrap consecutive <rt>s in a single anonymous <rtc>, forming one level.

    For example, we can extend the example above along these lines:
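
    <ruby>
      <rb>とある<rb>科学<rb>の<rb>超電磁砲
      <rp>(</rp><rt>とある<rt>かがく<rt>の<rt>レールガン<rp>)</rp>
      <rtc lang="en">
        <rt>A Certain<rt>Scientific<rt><rt>Railgun
      </rtc>
    </ruby>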


    If you do not put any <rt> inside a <rtc>, this annotation will become a span across the whole base:

      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>

    You can use ruby-position to place a given level of annotation on the side you want. For the examples above, if you want to put the second level under the main line, you can apply ruby-position: under; to the <rtc> tag. Currently, only under and the default value over are supported.

    (Note: The CSS Working Group is considering a change to the default value of ruby-position, so that annotations become double-sided by default. This change is likely to happen in a future version of Firefox.)

    Finally, here is an advanced example of ruby combining everything introduced above:

    Advanced Example for Ruby

    rtc:lang(en) {
      ruby-position: under;
      ruby-align: center;
      font-size: 75%;
    }
      <rtc lang="en">A Certain Scientific Railgun</rtc><rp></rp>

    Got questions, comments, feedback on the implementation? Don’t hesitate to share your thoughts or issues here or via bugzilla.

  7. Pseudo elements, promise inspection, raw headers, and much more – Firefox Developer Edition 36

    Firefox 36 was just uplifted to the Developer Edition channel, so let’s take a look at the most important Developer Tools changes in this release. We will also cover some changes from Firefox 35 since it was released shortly before the initial Developer Edition announcement. There is a lot to talk about, so let’s get right to it.


    You can now inspect ::before and ::after pseudo elements.  They behave like other elements in the markup tree and inspector sidebars. (35, development notes)


    There is a new “Show DOM Properties” context menu item in the markup tree. (35, development notes, documentation about this feature on MDN)


    The box model highlighter now works on remote targets, so there is a full-featured highlighter even when working with pages on Firefox for Android or apps on Firefox OS. (36, development notes, and technical documentation for building your own custom highlighter)

    Shadow DOM content is now visible in the markup tree; note that you will need to set dom.webcomponents.enabled to true to test this feature (36, development notes, and see bug 1053898 for more work in this space).

    We borrowed a useful feature from Firebug and are now allowing more paste options when right clicking a node in the markup tree.  (36, development notes, documentation about this feature on MDN)


    Some more changes to the Inspector included in Firefox 35 & 36:

    • Deleting a node now selects the previous sibling instead of the parent (36, development notes)
    • There is higher contrast for the currently hovered node in the markup view (36, development notes)
    • Hover over a CSS selector in the computed view to highlight all the nodes that match that selector on the page. (35, development notes)

    Debugger / Console

    DOM Promises are now inspectable. You can inspect the promise’s state and value at any moment. (36, development notes)
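
    For example, you can create a promise in the console and inspect it before and after it settles (a quick sketch):

    // a promise that settles after one second
    var p = new Promise(function (resolve) {
      setTimeout(function () { resolve("done"); }, 1000);
    });
    console.log(p); // inspect its state now (pending) and again once it resolves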


    The debugger now recognizes and works with eval’ed sources. (36, development notes)

    Eval’ed sources support the //# sourceURL=path.js syntax, which will make them appear as a normal file in the debugger and in stack traces reported by Error.prototype.stack. See this post for much more information. (36, development notes, more development notes)
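
    For example (a minimal sketch; the file name is invented):

    // without the sourceURL comment this code would show up as an anonymous
    // eval'ed source; with it, it appears as "greeting.js"
    eval('function greet() { console.log("hi"); }\n//# sourceURL=greeting.js');
    greet();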

    Console statements now include the column number they originated from. (36, development notes)


    You are now able to connect to Firefox for Android from the WebIDE.  See documentation for debugging Firefox for Android on MDN.  (36, development notes).

    We also made some changes to improve the user experience in the WebIDE:

    • Bring up developer tools by default when I select a runtime app / tab (35, development notes)
    • Re-select project on connect if last project is runtime app (35, development notes)
    • Auto-select and connect to last used runtime if available (35, development notes)
    • Font resizing (36, development notes)
    • You can now add a hosted app project by entering the base URL (e.g. “”) instead of requiring the full path to the manifest.webapp file (35, development notes)

    Network Monitor

    We added a plain request/response headers view to make it easier to view and copy the raw headers on a request. (35, development notes)


    Here is a list of all the bugs resolved for Firefox 35 and all the bugs resolved for Firefox 36.

    Do you have feedback, bug reports, feature requests, or questions? As always, you can comment here, add/vote for ideas on UserVoice or get in touch with the team at @FirefoxDevTools on Twitter.

  8. Videos and Firefox OS

    Before HTML5

    Those were dark times, Harry, dark times. – Rubeus Hagrid

    Before HTML5, displaying video on the Web required browser plugins and Flash.

    Luckily, Firefox OS supports HTML5 video so we don’t need to support these older formats.

    Video support on the Web

    Even though modern browsers support HTML5, the video formats they support vary.

    In summary, to support the most browsers with the fewest formats you need the MP4 and WebM video formats (Firefox prefers WebM).

    Multiple sizes

    Now that you have seen what formats you can use, you need to decide on video resolutions, as desktop users on high speed wifi will expect better quality videos than mobile users on 3G.

    At Rormix we decided on 720p for desktop, 360p for mobile connections, and 180p specially for Firefox OS to reduce the cost in countries with high data charges.

    There are no hard and fast rules — it depends on who your target audience is.


    The best streaming solution would be to automatically serve the user different videos sizes depending on their connection status (adaptive streaming) but support for this technology is poor.

    HTTP live streaming works well on Apple devices, but has poor support on Android.

    At the time of writing, the most promising technology is MPEG DASH, which is an international standard.

    In summary, we are going to have to wait before we get an adaptive streaming technology that is widely accepted (Firefox does not support HLS or MPEG DASH).

    DIY Adaptive streaming

    In the absence of adaptive streaming we need to try to work out the best video quality to load at the outset. The following is a quick guide to help you decide:

    Wifi or 3G

    Using a certified Firefox OS app you can check to see if the user is on wifi or not.

    var lock    = navigator.mozSettings.createLock();
    var setting = lock.get('wifi.enabled');
    setting.onsuccess = function () {
      console.log('wifi.enabled: ' + setting.result);
    };
    setting.onerror = function () {
      console.warn('An error occurred: ' + setting.error);
    };

    There is some more information at the W3C Device API.

    Detecting screen size

    There is no point sending a 720p video to a user with a screen smaller than 720p. There are many ways to get the different bounds of a user’s screen; innerWidth and width allow you to get a good idea:

    function getVidSize() {
      //Get the width of the phone (rotation independent)
      var min = Math.min($(window).innerHeight(), $(window).innerWidth());
      //Return a video size we have
      if(min < 320)      return '180';
      else if(min < 550) return '360';
      else               return '720';
    }

    Determining internet speed

    It is difficult to get an accurate read of a user’s internet speed using web technologies — usually this involves loading a large image onto the user’s device and timing it. This has the disadvantage of having to send more data to the user. Speed-testing services exist, but they still require data downloads to the user’s device. (Stackoverflow has some more options.)

    You can be slightly more clever by using HTML5, and checking the time it takes between the user starting the video and a set amount of the video loading. This way we do not need to load any extra data on the user’s device. A quick VideoJS example follows:

    var global_speedcount = 0;
    var global_video = null;
    global_video = videojs("video", {}, function(){
      //Set up video sources
    });
    global_video.on('play', function(){
      //User has clicked play
      global_speedcount = new Date().getTime();
    });
    function timer() {
      var diff = new Date().getTime() - global_speedcount;
      //Remove this handler as it is run multiple times per second!
      global_video.off('timeupdate', timer);
    }
    global_video.on('timeupdate', timer);

    This code starts timing when the user clicks play, and when the browser starts to play the video, the timeupdate handler receives the timing information. You can also use this function to detect whether a lot of buffering is happening.

    Detect high resolution devices

    One final thing to determine is whether or not a user has a high pixel density screen. In this case even if they have a small screen it can still have a large number of pixels (and therefore require a higher resolution video).

    Modernizr has a plugin for detecting hi-res screens.

    if (Modernizr.highresdisplay) {
      alert('Your device has a high resolution screen');
    }
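
    If you would rather avoid the plugin, window.devicePixelRatio gives a similar signal in most modern browsers:

    if (window.devicePixelRatio && window.devicePixelRatio > 1) {
      // hi-DPI screen: consider serving higher resolution thumbnails and video
    }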

    WebP Thumbnails

    Not to get embroiled in an argument, but at Rormix we have seen an average decrease of 30% in file size (WebP vs JPEG) with no loss of quality (in some cases up to 50% less). And in countries with expensive data plans, the less data the better.

    We encode all of our thumbnails in multiple resolutions of WebP and send them to every device that supports them to reduce the amount of data being sent to the user.
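
    One way to do the switching is to let the browser negotiate the format itself, for example with the <picture> element (a sketch; the file names are illustrative, and browsers without <picture> or WebP support fall back to the JPEG):

    <picture>
      <source srcset="thumb-360.webp" type="image/webp">
      <img src="thumb-360.jpg" alt="Video thumbnail">
    </picture>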

    Mobile considerations

    If you are playing HTML5 videos on mobile devices, their behavior differs. On iOS it automatically goes to full screen on iPhones/iPods, but not on tablets.

    Some libraries such as VideoJS have removed the controls from mobile devices until their stability increases.

    Useful libraries

    There are a few useful HTML5 video libraries:

    Mozilla links

    Mozilla has some great articles on web video:

    Other useful Links

  9. An easier way of using polyfills

    Polyfills are a fantastic way to enable the use of modern code even while supporting legacy browsers, but currently using polyfills is too hard, so at the FT we’ve built a new service to make it easier. We’d like to invite you to use it, and help us improve it.


    More pictures, they said. So here’s a unicorn, which is basically a horse with a polyfill.

    The challenge

    Here are some of the issues we are trying to solve:

    • Developers do not necessarily know which features need to be polyfilled. You load your site in some old version of IE beloved by a frustratingly large number of your users, see that the site doesn’t work, and have to debug it to figure out which feature is causing the problem. Sometimes the culprit is obvious, but often not, especially when legacy browsers also lack good developer tools.
    • There are often multiple polyfills available for each feature. It can be hard to know which one most faithfully emulates the missing feature.
    • Some polyfills come as a big bundle with lots of other polyfills that you don’t need, to provide comprehensive coverage of a large feature set, such as ES6. It should not be necessary to ship all of this code to the browser to fix something very simple.
    • Newer browsers don’t need the polyfill, but typically the polyfill is served to all browsers. This reduces performance in modern browsers in order to improve compatibility with legacy ones. We don’t want to make that compromise. We’d rather serve polyfills only to browsers that lack a native implementation of the feature.

    Our solution: polyfills as a service

    To solve these problems, we created the polyfill service. It’s a similar idea to going to an optometrist, having your eyes tested, and getting a pair of glasses perfectly designed to correct your particular vision problem. We are doing the same for browsers. Here’s how it works:

    1. Developers insert a script tag into their page, which loads the polyfill service endpoint.
    2. The service analyses the browser’s user-agent header and a list of requested features (or uses a default list of everything polyfillable) and builds a list of polyfills that are required for this browser.
    3. The polyfills are ordered using a graph sort to place them in the right dependency order.
    4. The bundle is minified and served through a CDN (for which we’re very grateful to Fastly for their support).

    Do we really need this solution? Well, consider this: Modernizr is a big grab bag of feature detects, and all sensible use cases benefit from a custom build, but a large proportion of Modernizr users just use the default build, often from a CDN or as part of html5boilerplate. Why include Modernizr if you aren’t using its feature detects? Maybe you misunderstand the purpose of the library and just think that Modernizr “fixes stuff”? I have to admit, I did, when I first heard the name, and I was mildly disappointed to find that rather than doing any actual modernising, Modernizr actually just defines modernness.

    The polyfill service, on the other hand, does fix stuff. There’s really nothing wrong with not wanting to spend time gaining intimate knowledge of all the foibles of legacy browsers. Let someone figure it out once, and then we can all benefit from it without needing or wanting to understand the details.

    How to use it

    The simplest use case is:

    <script src="//" async defer></script>

    This includes our default polyfill set. The default set is a manually curated list of features that we think are most essential to modern web development, and where the polyfills are reasonably small and highly accurate. If you want to specify which features you want to polyfill though, go right ahead:

    <!-- Just the Array.from polyfill -->
    <script src="//" async defer></script>
    <!-- The default set, plus the geolocation polyfill -->
    <script src="//,Navigator.prototype.geolocation" async defer></script>

    If it’s important that you have loaded the polyfills before parsing your own code, you can remove the async and defer attributes, or use a script loader (one that doesn’t require any polyfills!).

    Testing and documenting feature support

    This table shows the polyfill service’s effect for a number of key web technologies and a range of popular browsers:

    Polyfill service support grid

    The full list of features we support is shown on our feature matrix. To build this grid we use Sauce Labs’ test automation platform, which runs each polyfill through a barrage of tests in each browser, and documents the results.

    So, er, user-agent sniffing? Really?

    Yes. There are several reasons why UA analysis wins out over feature detection for us:

    • In some cases, we have multiple polyfills for the same feature, because some browsers offer a non-compliant implementation that just needs to be bashed into shape, while others lack any implementation at all. With UA detection you can choose to serve the right variant of the polyfill.
    • With UA detection, the first HTTP request can respond directly with polyfill code. If we used feature detection, the first request would serve feature-detect code, and then a second one would be needed to fetch specific polyfills.

    Almost all websites with significant scale do UA detection. This isn’t to say the stigma attached to it is undeserved: it’s easy to write bad UA detection rules, and hard to write good ones. And we’re not ruling out adding a way of using the service via feature-detects (in fact there’s an issue in our tracker for it).

    A service for everyone

    The service part of the app is maintained by the FT, and we are working on expanding and improving the tools, documentation, testing and service features all the time. The source is freely available on GitHub so you can easily host it yourself, but we also host an instance of the service which you can use for free, and our friends at Fastly are providing free CDN distribution and SSL.

    We’ve made a platform. We need the community’s help to populate it. We already serve some of the best polyfills from Jonathan Neal, Mathias Bynens and others, but we’d love to be more comprehensive. Bring your polyfills, improve our tests, and make this a resource that can help move the web forward!

  10. Blend4Web: the Open Source Solution for Online 3D

    Half a year ago, Blend4Web was first released publicly. In this article I’ll show what Blend4Web is, how it has evolved, and how it can be used for web development.

    What Is Blend4Web?

    In short, Blend4Web is an open source framework for creating 3D web applications. It uses Blender – the popular open source 3D modeling suite – as the primary authoring tool. 3D graphics are rendered by means of WebGL, which is also an open standard technology. The two main keywords here – Blender and Web(GL) – explain the purpose of this engine perfectly.

    The full source code of Blend4Web together with some usage examples is available under GPLv3 on GitHub (there is also a commercial licensing option).

    The 3D Web

    On June 2nd, Apple presented their new operating systems – OS X Yosemite and iOS 8 – both featuring WebGL support in their Safari browser. That marked the end of a 5-year cycle during which WebGL had been evolving, starting with the first unstable browser builds (does anybody remember Firefox 3.7 alpha?). Now, all the major browsers on all desktop and mobile systems support this open standard for rendering 3D graphics, everywhere, without any plugins.

    That was a long and difficult road, along which Blend4Web development shadowed WebGL development: broken rendering, tab crashes, security “warnings” from some big guys, unavailability in public browser builds, all sorts of fear, uncertainty and doubt. None of it mattered, because we now had the opportunity to do 3D graphics (and sound) in browsers!


    The first Blender 2.5x builds appeared in summer 2010. At the time we, the programming geeks, were pushed to learn the basics of 3D modeling by the beautiful Sintel from the open source movie of the same name. Choosing Blender let us be as independent as possible, with a fully open source pipeline organized on a Linux platform. Blender gave us the power to make our own 3D scenes, and later helped to attract talented artists from its wonderful community to join us.

    Blend4Web Evolution in Demos

    Our demo scenes matured together with the development of Blend4Web. The first one was a fairly low-poly and almost non-interactive demo called The Island. It was created in 2011 and polished a bit before the public release. In this demo we introduced our Blender-based pipeline, in which all the assets are stored in separate files and are linked into the main file for level design and further exporting (for this reason some Blend4Web users call it “free Unity Pro”).

    In Fashion Show we developed cloth animation techniques. Some post-processing effects, dynamic reflection, and particle systems were added later. After Blend4Web went public, we summarized these cloth-related tricks in one of our tutorials.

    The Farm is a huge scene (by the standards of a browser): over 25 hectares of land, buildings, animated animals, and foliage. We added some gamedev elements to it, including first-person walking, interacting with objects, and driving a vehicle. The demo features spatial audio (via Web Audio) and physics (via Bullet and Asm.js). The Freedesktop folks tried it as a benchmark while testing the Mesa drivers (and got “massive crashes” :).

    We also tried some visualization and created Nature Morte. In this scene we used carefully crafted textures and materials, as well as post-processing effects, to improve realism. However, the technology used for this demo was quite simple and old-school, as we had no support for visual shader editing yet.

    Things changed when Blender’s node materials became available to our artists. They created over 40 different materials for the Sports Car model: chromed metal, painted metal, glass, rubber, leather, etc.

    In our latest release we went even further by adding support for user-controlled animation. Now interactivity can be implemented without any coding. To demonstrate the possibilities this opens up, we presented an interactive infographic of a light helicopter.

    Among the other possible applications of this simple yet effective tool (called NLA Script) we can list the following: interactive 3D web design, product promotions, learning materials, cartoons with the ability to choose between different story lines, point-and-click games and any other applications previously created with Flash.

    Using Blend4Web

    It is very easy to start using Blend4Web – just download and install the Blender addon as shown in this video tutorial:

    The most wonderful thing is that your Blender scene can be exported into a self-contained HTML file that can be emailed, uploaded to your own website or to a cloud – in short, shared however you like. This freedom is a fundamental difference from numerous 3D web publishing services, as we don’t lock our users into our technology by any means.

    For those who want to create highly interactive 3D web apps we offer the SDK. Some notable examples of what is possible with the Blend4Web API are demonstrated in our programming tutorials, ranging from web design to games.

    Programming 3D web apps with Blend4Web is not much harder than building average RIAs. Unlike some other WebGL frameworks in the wild, we tried to offload all graphics, animation, and audio tasks to the respective professionals. The programmer just loads the scene…

    var m_data = require("data");
    m_data.load("example.json", load_cb);

    …and then writes the logic which triggers the 3D scene changes that are “hard-coded” by the artists, e.g. plays the animation for the user-selected object:

    var m_scenes = require("scenes");
    var m_anim = require("animation");
    var myobj = m_scenes.pick_object(event.clientX, event.clientY);
    // play the picked object's animation (names assumed from the Blend4Web API)
    m_anim.apply_def(myobj);
    m_anim.play(myobj);

    As you can see the APIs are structured in a CommonJS way which we believe is important for creating compact and fast web apps.

    The Future

    There are many possible directions in which the Internet and IT may go, but there is no doubt that the strong and steady development of the 3D Web is already here. We expect that more and more users will change their expectations about how web content should look and feel. We’re going to help web developers meet these demands, with plans to improve usability and performance and to implement new and interesting graphics effects.

    We also follow the development of WebGL 2.0 (thanks, Mozilla, for your work) and expect to create even more nice things on top of it.

    Stay Tuned

    Read our blog, join us on Twitter, Google+, Facebook and Reddit, watch the demos and tutorials on our YouTube channel, fork Blend4Web at GitHub.