Articles

  1. QuaggaJS – Building a barcode-scanner for the Web

    Have you ever tried to type in a voucher code on your mobile phone or simply enter the number of your membership card into a web form?

    These are just two examples of time-consuming and error-prone tasks which can be avoided by taking advantage of printed barcodes. This is nothing new; many solutions exist for reading barcodes with a regular camera, like zxing, but they require a native platform such as Android or iOS. I wanted a solution which works on the Web, without plugins of any sort, and which even Firefox OS could leverage.

    My general interest in computer vision and web technologies fueled my curiosity as to whether something like this would be possible. Not just a simple scanner, but a scanner equipped with localization mechanisms to find a barcode in real-time.

    The result is a project called QuaggaJS, which is hosted on GitHub. Take a look at the demo pages to get an idea of what this project is all about.


    How does it work?

    Simply speaking, the pipeline can be divided into the following three steps:

    1. Reading the image and converting it into a binary representation
    2. Determining the location and rotation of the barcode
    3. Decoding the barcode based on its type (EAN, Code128)

    The first step requires the source to be either a webcam stream or an image file, which is then converted into gray-scale and stored in a 1D array. After that, the image data is passed along to the locator, which is responsible for finding a barcode-like pattern in the image. And finally, if a pattern is found, the decoder tries to read the barcode and return the result. You can read more about these steps in how barcode localization works in QuaggaJS.

    The real-time challenge

    One of the main challenges was to get the pipeline up to speed and fast enough to be considered a real-time application. When talking about real-time in image-processing applications, I consider 25 frames per second (FPS) the lower boundary. This means that the entire pipeline has to complete within 40 ms.

    The core parts of QuaggaJS are made up of computer vision algorithms which tend to be quite heavy on array access. As I already mentioned, the input image is stored in a 1D array. This is not a regular JavaScript Array, but a Typed Array. Since the image has already been converted to gray-scale in the first step, the range of each pixel’s value is set between 0 and 255. This is why Uint8Arrays are used for all image-related buffers.
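
    As a minimal illustration (sizes here are example values, not QuaggaJS defaults), such a buffer and its indexing look like this:

    var width = 640,
        height = 480,
        gray = new Uint8Array(width * height);

    // Pixel (x, y) lives at index y * width + x.
    function pixelAt(x, y) {
      return gray[y * width + x];
    }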

    Memory efficiency

    One of the key ways to achieve real-time speed for interactive applications is to create memory-efficient code which avoids large GC (garbage collection) pauses. That is why I removed most of the memory allocation calls by simply reusing initially created buffers. However, this is only useful for buffers whose size is known up front and does not change over time, as with images.
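
    As a rough sketch of the pattern (the names here are illustrative, not QuaggaJS internals), the working buffer is allocated once and overwritten on every frame, using a gray-scale conversion like the computeGray call shown later in this post:

    // Allocate the working buffer once, up front.
    var frameBuffer = new Uint8Array(width * height);

    function processFrame(ctx) {
      // Overwrite the same buffer every frame instead of allocating a
      // new one, which would put pressure on the garbage collector.
      var rgba = ctx.getImageData(0, 0, width, height).data;
      computeGray(rgba, frameBuffer);
      // ... hand frameBuffer to the locator/decoder ...
    }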

    Profiling

    When you are curious why a certain part of your application runs too slowly, a CPU profile may come in handy.

    Firefox includes some wonderful tools for creating CPU profiles of running JavaScript code. During development, this proved invaluable for pinpointing performance bottlenecks and finding the functions which caused the most load on the CPU. The following profile was recorded during a session with a webcam on an Intel Core i7-4600U. (Config: video 640×480, half-sampling barcode-localization)


    The profile is zoomed in and shows four subsequent frames. On average, one frame in the pipeline is processed in roughly 20 ms. This can be considered fast enough, even when running on machines with less powerful CPUs, like mobile phones or tablets.

    I marked each step of the pipeline in a different color; green is the first, blue the second and red the third one. The drill-down shows that the localization step consumes most of the time (55.6 %), followed by reading the input stream (28.4 %) and finally by decoding (3.7 %). It is also worth noting that skeletonize is one of the most expensive functions in terms of CPU usage. Because of that, I re-implemented the entire skeletonizing algorithm in asm.js by hand to see whether it could run even faster.

    asm.js

    Asm.js is a highly optimizable subset of JavaScript that can execute at close to native speed. It promises a lot of performance gains when used for compute-intensive tasks (take a look at MASSIVE), like most computer vision algorithms. That’s why I ported the entire skeletonizer module to asm.js. This was a very tedious task, because you are actually not supposed to write asm.js code by hand. Usually asm.js code is generated when it is cross-compiled from C/C++ or other LLVM languages using emscripten. But I did it anyway, just to prove a point.

    The first thing that needs to be sorted out is how to get the image data into the asm.js module, along with parameters like the size of the image. The module is designed to fit right into the existing implementation and therefore incorporates some constraints, like a square image size. However, the skeletonizer is only applied to chunks of the original image, which are all square by definition. Not only is the input data relevant; three temporary buffers are also needed during processing (eroded, temp, skeleton).

    In order to cover that, an initial buffer is created, big enough to hold all four images at once. The buffer is shared between the caller and the module. Since we are working with a single buffer, we need to keep a reference to the position of each image. It’s like playing with pointers in C.

    function skeletonize() {
      var subImagePtr = 0,    // input image starts at offset 0
        erodedImagePtr = 0,
        tempImagePtr = 0,
        skelImagePtr = 0;
     
      erodedImagePtr = imul(size, size) | 0;                // offset size^2
      tempImagePtr = (erodedImagePtr + erodedImagePtr) | 0; // offset 2 * size^2
      skelImagePtr = (tempImagePtr + erodedImagePtr) | 0;   // offset 3 * size^2
      // ...
    }

    To get a better understanding of the idea behind the structure of the buffer, compare it with the following illustration:

    Buffer in Skeletonizer

    The buffer in green represents the allocated memory, which is passed in the asm.js module upon creation. This buffer is then divided into four blue blocks, of which each contains the data for the respective image. In order to get a reference to the correct data block, the variables (ending with Ptr) are pointing to that exact position.
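
    Wiring this up could look roughly as follows (the factory name is illustrative; asm.js modules conventionally receive stdlib, foreign and a heap buffer, and the module above reads size from its foreign object):

    var size = 128,                              // example chunk edge length
        heap = new ArrayBuffer(4 * size * size); // 65536 bytes: a valid asm.js heap size

    // Hypothetical module factory following the asm.js convention.
    var skeletonizer = SkeletonizerAsm(window, { size: size }, heap);

    // The caller writes the sub-image into bytes [0, size * size) of the
    // heap, calls skeletonize(), and reads the result from the skeleton
    // block starting at offset 3 * size * size.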

    Now that we have set up the buffer, it is time to take a look at the erode function, which is part of the skeletonizer written in vanilla JavaScript:

    function erode(inImageWrapper, outImageWrapper) {
      var v,
        u,
        inImageData = inImageWrapper.data,
        outImageData = outImageWrapper.data,
        height = inImageWrapper.size.y,
        width = inImageWrapper.size.x,
        sum,
        yStart1,
        yStart2,
        xStart1,
        xStart2;
     
      for ( v = 1; v < height - 1; v++) {
        for ( u = 1; u < width - 1; u++) {
          yStart1 = v - 1;
          yStart2 = v + 1;
          xStart1 = u - 1;
          xStart2 = u + 1;
          sum = inImageData[yStart1 * width + xStart1] +
            inImageData[yStart1 * width + xStart2] +
            inImageData[v * width + u] +
            inImageData[yStart2 * width + xStart1] +
            inImageData[yStart2 * width + xStart2];
     
          outImageData[v * width + u] = sum === 5 ? 1 : 0;
        }
      }
    }

    This code was then modified to conform to the asm.js specification.

    "use asm";
     
    // initially creating a view on the buffer (passed in)
    var images = new stdlib.Uint8Array(buffer),
      size = foreign.size | 0;
     
    function erode(inImagePtr, outImagePtr) {
      inImagePtr = inImagePtr | 0;
      outImagePtr = outImagePtr | 0;
     
      var v = 0,
        u = 0,
        sum = 0,
        yStart1 = 0,
        yStart2 = 0,
        xStart1 = 0,
        xStart2 = 0,
        offset = 0;
     
      for ( v = 1; (v | 0) < ((size - 1) | 0); v = (v + 1) | 0) {
        offset = (offset + size) | 0;
        for ( u = 1; (u | 0) < ((size - 1) | 0); u = (u + 1) | 0) {
          yStart1 = (offset - size) | 0;
          yStart2 = (offset + size) | 0;
          xStart1 = (u - 1) | 0;
          xStart2 = (u + 1) | 0;
          sum = ((images[(inImagePtr + yStart1 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart1 + xStart2) | 0] | 0) +
            (images[(inImagePtr + offset + u) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart1) | 0] | 0) +
            (images[(inImagePtr + yStart2 + xStart2) | 0] | 0)) | 0;
          if ((sum | 0) == (5 | 0)) {
            images[(outImagePtr + offset + u) | 0] = 1;
          } else {
            images[(outImagePtr + offset + u) | 0] = 0;
          }
        }
      }
      return;
    }

    Although the basic code structure did not change significantly, the devil is in the details. Instead of passing in references to JavaScript objects, the respective indexes of the input and output images, pointing into the buffer, are used. Another noticeable difference is the repeated casting of values to integers with the | 0 notation, which is necessary for safe array access. There is also an additional variable, offset, which is used as a counter to keep track of the absolute position in the buffer. This approach replaces the multiplication used for determining the current position. In general, asm.js does not allow multiplication of integers except via the imul operator.

    Finally, the use of the ternary operator (? :) is forbidden in asm.js, so it has simply been replaced by a regular if...else condition.

    Performance comparison

    And now it is time to answer the more important question: how much faster is the asm.js implementation compared to regular JavaScript? Let’s take a look at the performance profiles; the first represents the plain JavaScript version and the second the asm.js version.

    [Profile: image stream, plain JavaScript version]

    [Profile: image stream, asm.js version]

    Surprisingly, the difference between the two implementations is not as big as you might expect (~10%). Apparently, the initial JavaScript code was already written cleanly enough that the JIT compiler could take full advantage of it. This assumption can only be proven right or wrong if someone re-implements the algorithm in C/C++ and cross-compiles it to asm.js using emscripten. I’m almost sure that the result would differ from my naïve port and produce much more optimized code.

    getUserMedia

    Besides performance, there are many other parts that must fit together in order to get the best experience. One of those parts is the portal to the user’s world, the camera. As we all know, getUserMedia provides an API to gain access to the device’s camera. Here, the difficulty lies in the differences among the major browser vendors: constraints, resolutions and events are all handled differently.

    Front/back-facing

    If you are targeting devices other than regular laptops or computers, the chances are high that these devices offer more than one camera. Nowadays almost every tablet or smartphone has a back- and front-facing camera. When using Firefox, selecting the camera programmatically is not possible. Every time the user confirms access to the camera, he or she has to select the desired one. This is handled differently in Chrome, where MediaStreamTrack.getSources exposes the available sources which can then be filtered. You can find the defined sources in the W3C draft.

    The following snippet demonstrates how to get preferred access to the user’s back-facing camera:

    MediaStreamTrack.getSources(function(sourceInfos) {
      var envSource = sourceInfos.filter(function(sourceInfo) {
        return sourceInfo.kind == "video"
            && sourceInfo.facing == "environment";
      }).reduce(function(a, source) {
        return source;
      }, null);
      var constraints = {
        audio : false,
        video : {
          optional : [{
            sourceId : envSource ? envSource.id : null
          }]
        }
      };
    });

    In the use-case of barcode-scanning, the user is most likely going to use the device’s back-facing camera. This is where choosing a camera up front can enormously improve the user experience.
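
    The constraints object built above can then be handed over to getUserMedia. A rough sketch using the prefixed Chrome API of the time (getSources was Chrome-only at this point):

    navigator.webkitGetUserMedia(constraints, function(stream) {
      // Attach the stream to a <video> element for further processing.
      var video = document.getElementById("camera");
      video.src = window.URL.createObjectURL(stream);
      video.play();
    }, function(err) {
      console.error("Camera access failed: ", err);
    });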

    Resolution

    Another very important topic when working with video is the actual resolution of the stream. This can be controlled with additional constraints to the video stream.

    var hdConstraint = {
      video: {
        mandatory: {
          width: { min: 1280 },
          height: { min: 720 }
        }
      }
    };

    The above snippet, when added to the video constraints, tries to get a video stream of the specified quality. If no camera meets those requirements, a ConstraintNotSatisfiedError is returned in the callback. However, these constraints are not fully compatible across browsers, since some use minWidth and minHeight instead.
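
    One pragmatic way to deal with this is to retry with relaxed constraints when the error callback fires; a rough sketch, assuming an unprefixed navigator.getUserMedia:

    function requestCamera(constraints, onSuccess) {
      navigator.getUserMedia(constraints, onSuccess, function(err) {
        if (err && err.name === "ConstraintNotSatisfiedError") {
          // No camera satisfies 1280x720; fall back to the defaults.
          navigator.getUserMedia({ video: true, audio: false }, onSuccess,
            function(e) { console.error("getUserMedia failed: ", e); });
        }
      });
    }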

    Autofocus

    Barcodes are typically rather small and must be close-up to the camera in order to be correctly identified. This is where a built-in auto-focus can help to increase the robustness of the detection algorithm. However, the getUserMedia API lacks functionality for triggering the auto-focus, and most devices do not even support continuous autofocus in the browser. If you have an up-to-date Android device, chances are high that Firefox is able to use the autofocus of your camera (e.g. Nexus 5 or HTC One). Chrome on Android does not support it yet, but there is already an issue filed.

    Performance

    And there is still the question of the performance impact caused by grabbing the frames from the video stream. The results were already presented in the profiling section: almost 30%, or 8 ms, of CPU time is consumed just fetching the image and storing it in a TypedArray instance. The typical process of reading the data from a video source looks as follows:

    1. Make sure the camera-stream is attached to a video-element
    2. Draw the image to a canvas using ctx.drawImage
    3. Read the data from the canvas using ctx.getImageData
    4. Convert the video to gray-scale and store it inside a TypedArray
    var video = document.getElementById("camera"),
        ctx = document.getElementById("canvas").getContext("2d"),
        ctxData,
        width = video.videoWidth,
        height = video.videoHeight,
        data = new Uint8Array(width*height);
     
    ctx.drawImage(video, 0, 0);
    ctxData = ctx.getImageData(0, 0, width, height).data;
    computeGray(ctxData, data);
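
    The computeGray function is not shown above; a minimal version, assuming the usual Rec. 601 luma weighting, could look like this:

    // Reduce RGBA canvas data to one gray byte per pixel.
    function computeGray(rgbaData, grayData) {
      for (var i = 0, j = 0; j < grayData.length; i += 4, j++) {
        grayData[j] = (0.299 * rgbaData[i] +
                       0.587 * rgbaData[i + 1] +
                       0.114 * rgbaData[i + 2]) | 0;
      }
    }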

    It would be very much appreciated if there were a way to get lower-level access to the camera frames without going through the hassle of drawing and reading every single image. This is especially important when processing higher resolution content.

    Wrap up

    It has been real fun to create a project centered on computer vision, especially because it connects so many parts of the web platform. Hopefully, limitations such as the missing auto-focus on mobile devices, or reading the camera stream, will be sorted out in the near future. Still, it is pretty amazing what you can build nowadays by simply using HTML and JavaScript.

    Another lesson learned is that implementing asm.js by hand is both hard and unnecessary if you already know how to write proper JavaScript code. However, if you already have an existing C/C++ codebase which you would like to port, emscripten does a wonderful job. This is where asm.js comes to the rescue.

    Finally, I hope more and more people jump on the computer vision path, even if technologies like WebCL are still a long way down the road. The future for Firefox might even see ARB_compute_shader eventually jump onto the fast track.

  2. Videos and Firefox OS

    Before HTML5

    Those were dark times Harry, dark times – Rubeus Hagrid

    Before HTML5, displaying video on the Web required browser plugins, such as Flash.

    Luckily, Firefox OS supports HTML5 video, so we don’t need to rely on these older technologies.

    Video support on the Web

    Even though modern browsers support HTML5 video, the formats they support vary.

    In summary, to support the most browsers with the fewest formats you need the MP4 and WebM video formats (Firefox prefers WebM).

    Multiple sizes

    Now that you have seen what formats you can use, you need to decide on video resolutions, as desktop users on high speed wifi will expect better quality videos than mobile users on 3G.

    At Rormix we decided on 720p for desktop, 360p for mobile connections, and 180p specially for Firefox OS to reduce the cost in countries with high data charges.

    There are no hard and fast rules — it depends on who your target audience is.

    Streaming?

    The best streaming solution would be to automatically serve the user different videos sizes depending on their connection status (adaptive streaming) but support for this technology is poor.

    HTTP live streaming works well on Apple devices, but has poor support on Android.

    At the time of writing, the most promising technology is MPEG DASH, which is an international standard.

    In summary, we are going to have to wait before we get an adaptive streaming technology that is widely accepted (Firefox does not support HLS or MPEG DASH).

    DIY Adaptive streaming

    In the absence of adaptive streaming we need to try to work out the best video quality to load at the outset. The following is a quick guide to help you decide:

    Wifi or 3G

    Using a certified Firefox OS app, you can check whether the user is on wifi or not.

    var lock    = navigator.mozSettings.createLock();
    var setting = lock.get('wifi.enabled');
     
    setting.onsuccess = function () {
      console.log('wifi.enabled: ' + setting.result);
    };
     
    setting.onerror = function () {
      console.warn('An error occurred: ' + setting.error);
    };

    https://developer.mozilla.org/en-US/docs/Web/API/Settings_API

    There is some more information at the W3C Device API.

    Detecting screen size

    There is no point sending a 720p video to a user with a screen smaller than 720p. There are many ways to get the bounds of a user’s screen; jQuery’s innerWidth() and innerHeight() give you a good idea:

    function getVidSize()
    {
      //Get the width of the phone (rotation independent)
      var min = Math.min($(window).innerHeight(),$(window).innerWidth());
      //Return a video size we have
      if(min < 320)      return '180';
      else if(min < 550) return '360';
      else               return '720';
    }

    http://www.quirksmode.org/m/tests/widthtest.html

    Determining internet speed

    It is difficult to get an accurate read of a user’s internet speed using web technologies — usually it involves loading a large image onto the user’s device and timing it. This has the disadvantage of having to send more data to the user. Some services, such as http://speedof.me/api.html, exist, but they still require data downloads to the user’s device. (Stack Overflow has some more options.)

    You can be slightly more clever by using HTML5, and checking the time it takes between the user starting the video and a set amount of the video loading. This way we do not need to load any extra data on the user’s device. A quick VideoJS example follows:

    var global_speedcount = 0;
    var global_video = null;
    global_video = videojs("video", {}, function(){
    //Set up video sources
    });
     
    global_video.on('play',function(){
      //User has clicked play
      global_speedcount = new Date().getTime();
    });
     
    function timer()
    {
      //diff holds the ms between pressing play and this first timeupdate
      var diff = new Date().getTime() - global_speedcount;
      //Remove this handler as it is run multiple times per second!
      global_video.off('timeupdate',timer);
    }
     
    global_video.on('timeupdate',timer);

    This code starts timing when the user clicks play; when the browser actually starts playing the video, the timeupdate handler fires and we can measure how long startup took. You can also use this function to detect whether a lot of buffering is happening.
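
    The measured delay can then feed back into the size choice; the thresholds below are invented for illustration:

    //Map the startup delay (ms) to one of the sizes we encode
    function pickSizeFromStartupDelay(diffMs)
    {
      if(diffMs > 3000)      return '180';
      else if(diffMs > 1000) return '360';
      else                   return '720';
    }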

    Detect high resolution devices

    One final thing to determine is whether or not a user has a high pixel density screen. In this case even if they have a small screen it can still have a large number of pixels (and therefore require a higher resolution video).

    Modernizr has a plugin for detecting hi-res screens.

    if (Modernizr.highresdisplay)
    {
      alert('Your device has a high resolution screen');
    }
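
    If you would rather avoid a plugin, window.devicePixelRatio gives a rough equivalent (support varied between browsers at the time):

    if (window.devicePixelRatio && window.devicePixelRatio > 1)
    {
      //Treat the device as high resolution and bump the video size
    }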

    WebP Thumbnails

    Not to get embroiled in an argument, but at Rormix we have seen an average decrease of 30% in file size (WebP vs JPEG) with no loss of quality (in some cases up to 50% less). And in countries with expensive data plans, the less data the better.

    We encode all of our thumbnails in multiple resolutions of WebP and send them to every device that supports them to reduce the amount of data being sent to the user.
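
    Detecting WebP support on the client is possible with a canvas round-trip; a common sketch (not necessarily the exact code we use at Rormix):

    //If the browser can encode WebP, toDataURL yields a WebP data URI
    function supportsWebP()
    {
      var canvas = document.createElement('canvas');
      canvas.width = canvas.height = 1;
      return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
    }
     
    var thumbExt = supportsWebP() ? '.webp' : '.jpg';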

    Mobile considerations

    HTML5 video behaves differently across mobile devices. On iOS it automatically goes full screen on iPhones/iPods, but not on tablets.

    Some libraries such as VideoJS have removed the controls from mobile devices until their stability increases.

    Useful libraries

    There are a few useful HTML5 video libraries:

    Mozilla links

    Mozilla has some great articles on web video:

    Other useful Links

  3. Mozilla Hacks gets a new Editor

    Almost three and a half years ago I wrote my first article for Mozilla Hacks and have been the Editor since September 2012. As the face and caretaker of this blog for such a long time, having published 350 posts in two years, I want to take the opportunity to thank you all for reading, and to pass on the torch to its new Editor.

    I’ve been in the same team as Havi Hoffman since I started at Mozilla, and I’m now glad to announce that she is the new Editor of Mozilla Hacks!

    I know there are a number of interesting upcoming articles and I’d strongly recommend you keep on reading this blog. Also make sure to follow it on Twitter: @mozhacks.

    All the comments & interesting discussions we’ve had over the years have been much appreciated, and thanks to all the authors who have taken their valuable time to share their knowledge and to help create a great Open Web.

    Thanks for now, and see you on the Internet!

  4. Firebug 3 & Multiprocess Firefox (e10s)

    Firebug 3 alpha was announced a couple of weeks ago. This version represents the next generation of Firebug, built on top of Firefox’s native developer tools.

    There are several reasons why having Firebug built on top of native developer tools in Firefox is an advantage — one of them is tight integration with the existing platform. This direction allows simple use of available platform components. This is important especially for upcoming multiprocess support in Firefox (also called Electrolysis or E10S).

    From the wiki:

    The goal of the Electrolysis project (“e10s” for short) is to run web content in a separate process from Firefox itself. The two major advantages of this model are security and performance.

    The e10s project introduces a great leap ahead in terms of security and performance, as well as putting more emphasis on the internal architecture of add-ons. The main challenge (for many extensions) is solving communication problems between processes. The add-on’s code will run in a different process (browser chrome process) from web page content (page content process) — see the diagram below. Every time an extension needs to access the web page it must use one of the available inter-process communication channels (e.g. message manager or remote debugging protocol). Direct access is no longer possible. This often means that many of the existing synchronous APIs will turn into asynchronous APIs.
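
    For example, the message manager lets code in the chrome process talk to a frame script running in the content process; a minimal sketch (the script URL and message names are placeholders):

    // In the chrome process: load a frame script into the content
    // process and listen for its messages.
    var mm = gBrowser.selectedBrowser.messageManager;
    mm.loadFrameScript("chrome://myaddon/content/frame-script.js", false);
    mm.addMessageListener("MyAddon:Title", function (msg) {
      console.log("Content title: " + msg.data.title);
    });

    // In frame-script.js, running in the content process:
    //   sendAsyncMessage("MyAddon:Title", { title: content.document.title });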

    Developer tools, including Firebug, deal with the content in many ways. Tools usually collect a large amount of (meta) data about the debugged page and present it to the user. Various CSS and DOM inspectors not only display internal content data, but also allow the user to edit them and see live changes. All these features require heavy interaction between a tool and the page content.

    So Firebug, built on top of the existing developer tools infrastructure that already ensures basic interaction with the debugged page, allows us to focus more on new features and user experience.

    Firebug Compatibility

    Firebug 2.0 is compatible with Firefox 30 – 36 and will support upcoming non-multiprocess browsers (as well as the recently announced browser for developers).

    Firebug 3.0 alpha (aka Firebug.next) is currently compatible with Firefox 35 – 36 and will support upcoming multiprocess (as well as non-multiprocess) browsers.

    Upgrade From Firebug 2

    If you install Firebug 2 into a multiprocess (e10s) enabled browser, you’ll be prompted to upgrade to Firebug 3 or switch off the multiprocess support.

    Learn more…

    Upgrading to Firebug 3 is definitely the recommended option. You might miss some Firebug 2 features in Firebug 3 (it’s still in the alpha phase), such as Firebug extensions, but this is the right time to provide feedback and let us know which features are priorities for you.

    You can follow us on Twitter to be updated.

    Leave a comment here or on the Firebug newsgroup.

    Jan ‘Honza’ Odvarko

  5. MetricsGraphics.js – a lightweight graphics library based on D3

    MetricsGraphics.js is a library built on top of D3 that is optimized for visualizing and laying out time-series data. It provides a simple way to produce common types of graphics in a principled and consistent way. The library supports line charts, scatterplots, histograms, barplots and data tables, as well as features like rug plots and basic linear regression.

    The library elevates the layout and explanation of these graphics to the same level of priority as the graphics. The emergent philosophy is one of efficiency and practicality.

    Hamilton Ulmer and I began building the library earlier this year, during which time we found ourselves copy-and-pasting bits of code in various projects. This led to errors and inconsistent features, and so we decided to develop a single library that provides common functionality and aesthetics to all of our internal projects.

    Moreover, at the time, we were having limited success with our attempts to get casual programmers and non-programmers within the organization to use a library like D3 to create dashboards. The learning curve was proving a bit of an obstacle. So it seemed reasonable to create a level of indirection using well-established design patterns to try and bridge that chasm.

    Our API is simple. All that’s needed to create a graphic is to specify a few default parameters and then, if desired, override one or more of the optional parameters on offer. We don’t maintain state. To update a graphic, one would call data_graphic on the same target element.
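
    In practice, an update is just another call with fresh data and the same target (updatedData is a placeholder here):

    // Re-render the chart; no state is kept between calls.
    data_graphic({
        data: updatedData,
        width: 650,
        height: 150,
        target: '#ufo-sightings',
        x_accessor: 'year',
        y_accessor: 'sightings'
    })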

    The library is also data-source agnostic. While it provides a number of convenience functions and options that allow for graphics to better handle things like missing observations, it doesn’t care where the data comes from.

    A quick tutorial

    Here’s a quick tutorial to get you started. Say that we have some data on a scholarly topic like UFO sightings. We decide that we’re interested in creating a line chart of yearly sightings.

    We create a JSON file called data/ufo-sightings.json based on the original dataset, where we aggregate yearly sightings. The data doesn’t have to be JSON of course, but that will mean less work later on.

    The next thing we do is load the data:

    d3.json('data/ufo-sightings.json', function(data) {
    })

    data_graphic expects the data object to be an array of objects, which is already the case for us. That’s good. It also needs dates to be timestamps if they’re in a format like yyyy-mm-dd. We’ve got aggregated yearly data, so we don’t need to worry about that. So now, all we need to do is create the graphic and place it in the element specified in target.
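
    As an aside, if our dates had been yyyy-mm-dd strings, a D3 time format could have converted them first (a sketch, reusing this example’s field names):

    // Turn 'yyyy-mm-dd' strings into Date objects before charting.
    var parseDate = d3.time.format('%Y-%m-%d').parse;
    data.forEach(function(d) {
        d.date = parseDate(d.date);
    });

    With yearly integers, though, the data can be passed straight through: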

    d3.json('data/ufo-sightings.json', function(data) {
        data_graphic({
            title: "UFO Sightings",
            description: "Yearly UFO sightings (1945 to 2010).",
            data: data,
            width: 650,
            height: 150,
            target: '#ufo-sightings',
            x_accessor: 'year',
            y_accessor: 'sightings',
            markers: [{'year': 1964, 
                       'label': '"The Creeping Terror" released'
            }]
        })
    })

    And this is what we end up with. In this example, we’re adding a marker to draw attention to a particular data point. This is optional of course.

    A line chart in MetricsGraphics.js

    A few final remarks

    We follow a real-needs approach to development. Right now, we have mostly implemented features that have been important to us. Having said that, our work is available on GitHub, as are many of our discussions, and we take any and all pull requests and issues seriously.

    There is still a lot of work to be done. We invite you to take the library out for a spin and file bugs! We’ve set up a sandbox that you can use to try things out without having to download anything: http://metricsgraphicsjs.org

    MetricsGraphics.js v1.1 is scheduled for release on December 1, 2014.

  6. Save the Web – Be a Ford-Mozilla Open Web Fellow

    This is a critical time in the evolution of the Web. Its core ethos of being free and open is at risk with too little interoperability and threats to privacy, security, and expression from governments throughout the world.

    To protect the Web, we need more people with technical expertise to get involved at the policy level. That’s why we created the Ford-Mozilla Open Web Fellowship.

    Photo: Joseph Gruber via Flickr

    What it is

    The Fellowship is a 10-month paid program that immerses engineers, data scientists, and makers in projects that create a better understanding of Internet policy issues among civil society, policy makers, and the broader public.

    What you’ll do

    Fellows will be embedded in one of five host organizations, each of which is leading in the fight to protect the open Web:

    • The American Civil Liberties Union
    • Public Knowledge
    • Free Press
    • The Open Technology Institute
    • Amnesty International

    The Fellows will serve as technology advisors, mentors, and ambassadors to these host organizations, helping to better inform the policy discussion.

    ​Photo: Alain Christian via Flickr

    What you’ll learn

    The program is a great opportunity for emerging technical leaders to take the next step in their careers. Fellows will have the opportunity to further develop their technical skills, learn about critical Internet policy issues, make strong connections in the policy field, and be recognized for their contributions.

    The standard fellowship offers a stipend of $60,000 over 10 months, plus supplements for travel, housing, child care, health insurance and moving expenses, as well as help paying for research/equipment and books.

    For more information, and to apply by the December 31 deadline, please visit http://advocacy.mozilla.org.

  7. Visually Representing Angular Applications

    This article concerns diagrammatically representing Angular applications. It is a first step, not a fully worked out dissertation about how to visually specify or document Angular apps. And maybe the result of this is that I, with some embarrassment, find out that someone else already has a complete solution.

    My interest in this springs from two ongoing projects:

    1. My day job working on the next generation version of Desk.com’s support center agent application and
    2. My night job working on a book, Angular In Depth, for Manning Publications

    1: Large, complex Angular application

    The first involves working on a large, complex Angular application as part of a multi-person front-end team. One of the problems I, and I assume other team members, encounter (hopefully I’m not the only one) is getting familiar enough with different parts of the application so that my additions or changes don’t hose it or cause problems down the road.

    With an Angular application it is sometimes challenging to trace what’s happening where. Directives give you the ability to encapsulate behavior and let you employ that behavior declaratively. That’s great. Until you have nested directives or multiple directives operating in tandem that someone else painstakingly wrote. That person probably had a clear vision of how everything related and worked together. But when you come to it newly, it can be challenging to trace the pieces and keep them in your head as you begin to add features.

    Wouldn’t it be nice to have a visual representation of complex parts of an Angular application? Something that gives you the lay of the land, so you can see at a glance what depends on what.

    2: The book project

    The second item above — the book project — involves trying to write about how Angular works under the covers. I think most Angular developers have at one time or another viewed some part of Angular as magical. We’ve also all cursed the documentation, particularly those descriptions that use terms whose descriptions use terms whose descriptions are poorly defined, based on an understanding of the first item in the chain.

    There’s nothing wrong with using Angular directives or services as demonstrated in online examples or in the documentation or in the starter applications. But it helps us as developers if we also understand what’s happening behind the scenes and why. Knowing how Angular services are created and managed might not be required to write an Angular application, but the ease of writing and the quality can be, I believe, improved by better understanding those kinds of details.

    Visual representations

    In the course of trying to better understand Angular behind-the-scenes and write about it, I’ve come to rely heavily on visual representations of the key concepts and processes. The visual representations I’ve done aren’t perfect by any means, but just working through how to represent a process in a diagram has a great clarifying effect.

    There’s nothing new about visually representing software concepts. UML, process diagrams, even Business Process Modeling Notation (BPMN) are ways to help visualize classes, concepts, relationships and functionality.

    And while those diagramming techniques are useful, it seems that at least in the Angular world, we’re missing a full-bodied visual language that is well suited to describe, document or specify Angular applications.

    We probably don’t need to reinvent the wheel here — obviously something totally new is not needed — but when I’m tackling a (for me) new area of a complex application, having a customized visual vocabulary available to represent it would help.

    Diagrammatically representing front-end JavaScript development

    I’m working with Angular daily, so I’m thinking specifically about how to represent an Angular application, but this may also be an issue within the larger JavaScript community: how to diagrammatically represent front-end JavaScript development in a way that allows us to clearly visualize our models, controllers and views, and the interactions between the DOM and our JavaScript code, including event-driven, async callbacks. In other words, a visual domain specific language (DSL) for client-side JavaScript development.

    I don’t have a complete answer for that, but in self-defense I started working with some diagrams to roughly represent parts of an Angular application. Here’s sort of the sequence I went through to arrive at a first cut:

    1. The first thing I did was write out a detailed description of the problem and what I wanted out of an Angular visual DSL. I also defined some simple abbreviations to use to identify the different types of Angular “objects” (directives, controllers, etc.). Then I dove in and began diagramming.
    2. I identified the area of code I needed to understand better, picked a file and threw it on the diagram. What I wanted to do was to diagram it in such a way that I could look at that one file and document it without simultaneously having to trace everything to which it connected.
    3. When the first item was on the diagram, I went to something on which it depended. For example, starting with a directive this leads to associated views or controllers. I diagrammed the second item and added the relationship.
    4. I kept adding items and relationships including nested directives and their views and controllers.
    5. I continued until the picture made sense and I could see the pieces involved in the task I had to complete.

    Since I was working on a specific ticket, I knew the problem I needed to solve so not all information had to be included in each visual element. The result is rough and way too verbose, but it did accomplish:

    • Showing me the key pieces and how they related, particularly the nested directives.
    • Including useful information on where methods or $scope properties lived.
    • Giving a guide to the directories where each item lives.

    It’s not pretty but here is the result:

    This represents a somewhat complicated part of the code and having the diagram helped in at least four ways:

    • By going through the exercise of creating it, I learned the pieces involved in an orderly way — and I didn’t have to try to retain the entire structure in my head as I went.
    • I got the high-level view I needed.
    • It was very helpful when developing, particularly since the work got interrupted and I had to come back to it a few days later.
    • When the work was done, I added it to our internal WIKI to ease future ramp-up in the area.

    I think some next steps might be to define and expand the visual vocabulary by adding things such as:

    • Unique shapes or icons to identify directives, controllers, views, etc.
    • Standardize how to represent the different kinds of relationships such as ng-include or a view referenced by a directive.
    • Standardize how to represent async actions.
    • Add representations of the model.

    As I said in the beginning, this is rough and nowhere near complete, but it did confirm for me the potential value of having a diagramming convention customized for JavaScript development. And in particular, it validated the need for a robust visual DSL to explore, explain, specify and document Angular applications.

  8. interact.js for drag and drop, resizing and multi-touch gestures

    interact.js is a JavaScript module for drag and drop, resizing and multi-touch gestures with inertia and snapping for modern browsers (and also IE8+).

    Background

    I started it as part of my GSoC 2012 project for Biographer’s network visualization tool. The tool was a web app which rendered to an SVG canvas and used jQuery UI for drag and drop, selection and resizing. Because jQuery UI has little support for SVG, heavy workarounds had to be used. I needed to make the web app more usable on smartphones and tablets, and the largest chunk of this work was to replace jQuery UI with interact.js, which:

    • is lightweight,
    • works well with SVG,
    • handles multi-touch input,
    • leaves the task of rendering/styling elements to the application and
    • allows the application to supply object dimensions instead of parsing element styles or getting DOMRects.

    What interact.js tries to do is present input data consistently across different browsers and devices and provide convenient ways to pretend that the user did something that they didn’t really do (snapping, inertia, etc.).

    Certain sequences of user input can lead to InteractEvents being fired. If you add a listener for an event type, that function is given an InteractEvent object which provides pointer coordinates and speed and, in gesture events, scale, distance, angle, etc. The only time interact.js modifies the DOM is to style the cursor; making an element move while a drag happens has to be done from your own event listeners. This way you’re in control of everything that happens.
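
    For example, a plain draggable element is moved by applying the event deltas yourself; a minimal sketch:

    interact('.draggable')
      .draggable(true)
      .on('dragmove', function (event) {
        var target = event.target,
            x = (parseFloat(target.getAttribute('data-x')) || 0) + event.dx,
            y = (parseFloat(target.getAttribute('data-y')) || 0) + event.dy;
     
        // interact.js never moves the element; the listener does it.
        target.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
        target.setAttribute('data-x', x);
        target.setAttribute('data-y', y);
      });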

    Slider demo

    Here’s an example of how you could make a slider with interact.js. You can view and edit the complete HTML, CSS and JS of all the demos in this post on CodePen.

    See the Pen interact.js simple slider by Taye A (@taye) on CodePen.

    JavaScript rundown

    interact('.slider')                   // target the matches of that selector
      .origin('self')                     // (0, 0) will be the element's top-left
      .restrict({drag: 'self'})           // keep the drag within the element
      .inertia(true)                      // start inertial movement if thrown
      .draggable({                        // make the element fire drag events
        max: Infinity                     // allow drags on multiple elements
      })
      .on('dragmove', function (event) {  // call this function on every move
        var sliderWidth = interact.getElementRect(event.target.parentNode).width,
            value = event.pageX / sliderWidth;
     
        event.target.style.paddingLeft = (value * 100) + '%';
        event.target.setAttribute('data-value', value.toFixed(2));
      });
     
    interact.maxInteractions(Infinity);   // Allow multiple interactions

    • interact('.slider') [docs] creates an Interactable object which targets elements that match the '.slider' CSS selector. An HTML or SVG element object could also have been used as the target but using a selector lets you use the same settings for multiple elements.
    • .origin('self') [docs] tells interact.js to modify the reported coordinates so that an event at the top-left corner of the target element would be (0,0).
    • .restrict({drag: 'self'}) [docs] keeps the coordinates within the area of the target element.
    • .inertia(true) [docs] lets the user “throw” the target so that it keeps moving after the pointer is released.
    • Calling .draggable({max: Infinity}) [docs] on the object:
      • allows drag listeners to be called when the user drags from an element that matches the target and
      • allows multiple target elements to be dragged simultaneously
    • .on('dragmove', function (event) {...}) [docs] adds a listener for the dragmove event. Whenever a dragmove event occurs, all listeners for that event type that were added to the target Interactable are called. The listener function here calculates a value from 0 to 1 depending on which point along the width of the slider the drag happened. This value is used to position the handle.
    • interact.maxInteractions(Infinity) [docs] is needed to enable multiple interactions on any target. The default value is 1 for backwards compatibility.

    A lot of differences in browser implementations are resolved by interact.js. MouseEvents, TouchEvents and PointerEvents would produce identical drag event objects so this slider works on iOS, Android, Firefox OS and Windows RT as well as on desktop browsers as far back as IE8.

    Rainbow pixel canvas demo

    interact.js is useful for more than moving elements around a page. Here I use it for drawing onto a canvas element.

    See the Pen interact.js pixel rainbow canvas by Taye A (@taye) on CodePen.

    JavaScript rundown

    var pixelSize = 16;
     
    interact('.rainbow-pixel-canvas')
      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })
      .origin('self')
      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })
      // draw colored squares on move
      .on('dragmove', function (event) {
        var context = event.target.getContext('2d'),
            // calculate the angle of the drag direction
            dragAngle = 180 * Math.atan2(event.dx, event.dy) / Math.PI;
     
        // set color based on drag angle and speed
        context.fillStyle = 'hsl(' + dragAngle + ', 86%, '
                            + (30 + Math.min(event.speed / 1000, 1) * 50) + '%)';
     
        // draw squares
        context.fillRect(event.pageX - pixelSize / 2, event.pageY - pixelSize / 2,
                         pixelSize, pixelSize);
      })
      // clear the canvas on doubletap
      .on('doubletap', function (event) {
        var context = event.target.getContext('2d');
     
        context.clearRect(0, 0, context.canvas.width, context.canvas.height);
      });
     
      function resizeCanvases () {
        [].forEach.call(document.querySelectorAll('.rainbow-pixel-canvas'), function (canvas) {
          canvas.width = document.body.clientWidth;
          canvas.height = window.innerHeight * 0.7;
        });
      }
     
      // interact.js can also add DOM event listeners
      interact(document).on('DOMContentLoaded', resizeCanvases);
      interact(window).on('resize', resizeCanvases);
     
    interact.maxInteractions(Infinity);

    Snapping is used to modify the pointer coordinates so that they are always aligned to a grid.

      .snap({
        // snap to the corners of a grid
        mode: 'grid',
        // specify the grid dimensions
        grid: { x: pixelSize, y: pixelSize }
      })

    Like in the previous demo, multiple drags are enabled but an extra option, maxPerElement, needs to be changed to allow multiple drags on the same element.

      .draggable({
        max: Infinity,
        maxPerElement: Infinity
      })

    The movement angle is calculated with Math.atan2(event.dx, event.dy) and that’s used to set the hue of the paint color. event.speed is used to adjust the lightness.

    interact.js has tap and double tap events which are equivalent to click and double click but without the delay on mobile devices. Also, unlike regular click events, a tap isn’t fired if the mouse is moved before being released. (I’m working on adding more events like these).

      // clear the canvas on doubletap
      .on('doubletap', function (event) {
        ...

    It can also listen for regular DOM events. In the above demo it’s used to listen for window resize and document DOMContentLoaded.

      interact(document).on('DOMContentLoaded', resizeCanvases);
      interact(window).on('resize', resizeCanvases);

    Similar to jQuery, it can also be used for delegated events. For example:

    interact('input', { context: document.body })
      .on('keypress', function (event) {
        console.log(event.key);
      });

    Supplying element dimensions

    To get element dimensions interact.js normally uses:

    • Element#getBoundingClientRect() for SVGElements and
    • Element#getClientRects()[0] for HTMLElements (because it includes the element’s borders)

    and adds page scroll. This is done when checking which action to perform on an element, checking for drops, calculating the 'self' origin and in a few other places. If your application keeps the dimensions of elements that are being interacted with, then it makes sense to use the application’s data instead of getting the DOMRect. To allow this, Interactables have a rectChecker() [docs] method to change how element dimensions are obtained. The method takes a function as an argument. When interact.js needs an element’s dimensions, the element is passed to that function and the return value is used.

    Graphic Editor Demo

    The “SVG editor” below has a Rectangle class to represent <rect class="edit-rectangle"/> elements in the DOM. Each rectangle object has dimensions, the element that the user sees and a draw method.

    See the Pen Interactable#rectChecker demo by Taye A (@taye) on CodePen.

    JavaScript rundown

    var svgCanvas = document.querySelector('svg'),
        svgNS = 'http://www.w3.org/2000/svg',
        rectangles = [];
     
    function Rectangle (x, y, w, h, svgCanvas) {
      this.x = x;
      this.y = y;
      this.w = w;
      this.h = h;
      this.stroke = 5;
      this.el = document.createElementNS(svgNS, 'rect');
     
      this.el.setAttribute('data-index', rectangles.length);
      this.el.setAttribute('class', 'edit-rectangle');
      rectangles.push(this);
     
      this.draw();
      svgCanvas.appendChild(this.el);
    }
     
    Rectangle.prototype.draw = function () {
      this.el.setAttribute('x', this.x + this.stroke / 2);
      this.el.setAttribute('y', this.y + this.stroke / 2);
      this.el.setAttribute('width' , this.w - this.stroke);
      this.el.setAttribute('height', this.h - this.stroke);
      this.el.setAttribute('stroke-width', this.stroke);
    };
     
    interact('.edit-rectangle')
      // change how interact gets the
      // dimensions of '.edit-rectangle' elements
      .rectChecker(function (element) {
        // find the Rectangle object that the element belongs to
        var rectangle = rectangles[element.getAttribute('data-index')];
     
        // return a suitable object for interact.js
        return {
          left  : rectangle.x,
          top   : rectangle.y,
          right : rectangle.x + rectangle.w,
          bottom: rectangle.y + rectangle.h
        };
      })

    Whenever interact.js needs to get the dimensions of one of the '.edit-rectangle' elements, it calls the rectChecker function that was specified. The function finds the Rectangle object using the element argument then creates and returns an appropriate object with left, right, top and bottom properties.

    This object is used for restricting when the restrict elementRect option is set. In the slider demo from earlier, restriction used only the pointer coordinates. Here, restriction will try to prevent the element from being dragged out of the specified area.

      .inertia({
        // don't jump to the resume location
        // https://github.com/taye/interact.js/issues/13
        zeroResumeDelta: true
      })
      .restrict({
        // restrict to a parent element that matches this CSS selector
        drag: 'svg',
        // only restrict before ending the drag
        endOnly: true,
        // consider the element's dimensions when restricting
        elementRect: { top: 0, left: 0, bottom: 1, right: 1 }
      })

    The rectangles are made draggable and resizable.

      .draggable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
     
          rectangle.x += event.dx;
          rectangle.y += event.dy;
          rectangle.draw();
        }
      })
      .resizable({
        max: Infinity,
        onmove: function (event) {
          var rectangle = rectangles[event.target.getAttribute('data-index')];
     
          rectangle.w = Math.max(rectangle.w + event.dx, 10);
          rectangle.h = Math.max(rectangle.h + event.dy, 10);
          rectangle.draw();
        }
      });
     
    interact.maxInteractions(Infinity);

    Development and contributions

    I hope this article gives a good overview of how to use interact.js and the types of applications that I think it would be useful for. If not, there are more demos on the project homepage and you can throw questions or issues at us on Twitter or GitHub. I’d really like to make a comprehensive set of examples and documentation, but I’ve been too busy with fixes and improvements. (I’ve also been too lazy :-P).

    Since the 1.0.0 release, user comments and contributions have led to loads of bug fixes and many new features.

    So please use it, share it, break it and help to make it better!

  9. jsDelivr and its open-source load balancing algorithm

    This is a guest post by Dmitriy Akulov of jsDelivr.

    Recently I wrote about jsDelivr and what makes it unique, describing in detail the features we offer and how our system works. Since then we have improved a lot of things and released even more features. But the biggest one was open-sourcing our load-balancing algorithm.

    As you know from the previous blog post, we use Cedexis to do our load balancing. In short, we collect millions of RUM (Real User Metrics) data points from all over the world. When a user visits a website of a Cedexis partner (or one of ours), a JavaScript snippet executes in the background, running performance checks against our core CDNs, MaxCDN and CloudFlare, and sending the data back to Cedexis. We can then do load balancing based on real-time performance information from real users and ISPs. This is important because it allows us to mitigate outages that CDNs can experience in very localized areas, such as a single country or even a single ISP, rather than worldwide.

    Open-sourcing the load balancing code

    Now our load balancing code is open to everybody to review, test and even send their own Pull Requests with improvements and modifications.

    Until recently the code was actually written in PHP, but due to performance issues and other problems that arose from that, it was decided to switch to JavaScript. Now the DNS application is completely written in JavaScript, and I will try to explain exactly how it works.

    This is an application that runs on the DNS level and integrates with Cedexis’ API. Every DNS request made to cdn.jsdelivr.net is processed by the following code and then, based on all the variables, it returns a CNAME that the client can use to get the requested asset.

    Declaring providers

    The first step is to declare our providers:

    providers: {
        'cloudflare': 'cdn.jsdelivr.net.cdn.cloudflare.net',
        'maxcdn': 'jsdelivr3.dak.netdna-cdn.com',
        ...
    },

    This object maps the aliases of our providers to the hostnames that we can return if a provider is chosen. We actually use a couple of custom servers to improve performance in locations that the CDNs lack, but we are currently in the process of removing all of them in favor of more enterprise CDNs that wish to sponsor us.

    Before I explain the next mapping, I want to skip ahead to line 40:

    defaultProviders: [ 'maxcdn', 'cloudflare' ],

    Because our CDN providers get so many more RUM tests than our custom servers, their data, and in turn the load-balancing results, are much more reliable. This is why by default only MaxCDN and CloudFlare are considered for any user request. It’s actually the main reason we want to sunset our custom servers.

    Country mapping

    With that out of the way, here comes the next mapping:

    countryMapping: {
        'CN': [ 'exvm-sg', 'cloudflare' ],
        'HK': [ 'exvm-sg', 'cloudflare' ],
        'ID': [ 'exvm-sg', 'cloudflare' ],
        'IT': [ 'prome-it', 'maxcdn', 'cloudflare' ],
        'IN': [ 'exvm-sg', 'cloudflare' ],
        'KR': [ 'exvm-sg', 'cloudflare' ],
        'MY': [ 'exvm-sg', 'cloudflare' ],
        'SG': [ 'exvm-sg', 'cloudflare' ],
        'TH': [ 'exvm-sg', 'cloudflare' ],
        'JP': [ 'exvm-sg', 'cloudflare', 'maxcdn' ],
        'UA': [ 'leap-ua', 'maxcdn', 'cloudflare' ],
        'RU': [ 'leap-ua', 'maxcdn' ],
        'VN': [ 'exvm-sg', 'cloudflare' ],
        'PT': [ 'leap-pt', 'maxcdn', 'cloudflare' ],
        'MA': [ 'leap-pt', 'prome-it', 'maxcdn', 'cloudflare' ]
    },

    This object contains country mappings that override the defaultProviders parameter. This is where the custom servers currently come into use. For some countries we know 100% that our custom servers can be much faster than our CDN providers, so we specify them manually. Since these locations are few, we only need to create a handful of rules.

    ASN mappings

    asnMapping: {
        '36114': [ 'maxcdn' ], // Las Vegas 2
        '36351': [ 'maxcdn' ], // San Jose + Washington
        '42473': [ 'prome-it' ], // Milan
        '32489': [ 'cloudflare' ], // Canada
        ...
    },

    ASN mappings contain overrides per ASN. Currently we use them to improve the results of Pingdom tests. Because we rely on RUM results to do load balancing, we never get any performance tests for ASNs used by hosting providers, such as the companies where Pingdom rents its servers. So the code is forced to fall back to country-level performance data to choose the best provider for Pingdom and any other synthetic test and server. This data is not always reliable, because not all ISPs have the same performance with a CDN provider as the fastest CDN provider country-wide. So we tweak some ASNs to work better with jsDelivr.

    More settings

    • lastResortProvider sets the CDN provider we want to use in case the application fails to choose one itself. This should be very rare.
    • defaultTtl: 20 is the TTL for our DNS record. We ran some tests and decided this was the optimal value. In the worst-case scenario, the maximum downtime jsDelivr can have is 20 seconds. Plus, our DNS and our CDN are fast enough to compensate for the extra DNS latency every 20 seconds without any impact on performance.
    • availabilityThresholds is a percentage value that sets the uptime below which a provider is considered down. This is based on RUM data. Again, because of some small issues with synthetic tests, we had to lower the Pingdom threshold; the Pingdom value does not impact anyone else.
    • sonarThreshold – Sonar is a secondary uptime monitor we use to verify the uptime of our providers. It runs every 60 seconds and checks all of our providers, including their SSL certificates. If something is wrong, our application will pick up the change in uptime, and if it drops below this threshold the provider will be considered down.
    • And finally, minValidRtt is there to filter out invalid RUM tests; a sketch of these settings follows below.
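
    Pulled together, the settings block might look something like the following. Apart from defaultTtl: 20, which is quoted above, every value here is an illustrative assumption, not our production configuration:

    settings: {
        defaultTtl: 20,                 // quoted above, in seconds
        lastResortProvider: 'maxcdn',   // assumed; any always-on provider
        availabilityThresholds: {
            normal: 92,                 // illustrative percentages
            pingdom: 50
        },
        sonarThreshold: 95,             // illustrative percentage
        minValidRtt: 5                  // illustrative, in milliseconds
    },

    And a plausible sketch of the filterInvalidRtt function referenced in the decision code further down, which uses minValidRtt to drop bogus measurements:

    // Assumed implementation: reject RUM round-trip times that are
    // too low to be real measurements
    function filterInvalidRtt(candidate) {
        return candidate.http_rtt >= settings.minValidRtt;
    }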

    The initialization process

    Next, our app starts the initialization process. It checks for invalid configuration and for uptime that does not meet our criteria, and every provider that fails these checks is removed from the potential candidates for this request.

    Next, we create a reasons array for debugging purposes and apply our override settings. Here we use the Cedexis API to get the latest live data for Sonar uptime, RUM availability, and HTTP performance.

    // Fetch Sonar uptime data, then keep only the providers that pass
    // the availability checks in filterCandidates
    sonar = request.getData('sonar');
    candidates = filterObject(request.getProbe('avail'), filterCandidates);
    //console.log('candidates: ' + JSON.stringify(candidates));
    // Attach each remaining provider's RUM HTTP round-trip time
    candidates = joinObjects(candidates, request.getProbe('http_rtt'), 'http_rtt');
    //console.log('candidates (with rtt): ' + JSON.stringify(candidates));
    candidateAliases = Object.keys(candidates);

    For uptime, we also filter out bad providers that don’t meet our uptime criteria by calling the filterCandidates function.

    function filterCandidates(candidate, alias) {
        // A provider survives only if it is part of the subpopulation
        // chosen for this request, its availability is at or above the
        // threshold, and its Sonar uptime is above the Sonar threshold
        return (-1 < subpopulation.indexOf(alias))
            && (candidate.avail !== undefined)
            && (candidate.avail >= availabilityThreshold)
            && (sonar[alias] !== undefined)
            && (parseFloat(sonar[alias]) >= settings.sonarThreshold);
    }
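
    The filterObject and joinObjects helpers are not shown in the snippets; here is a sketch of what they plausibly do, based on how they are called (assumed implementations, not our actual code):

    // Keep only the entries of an object for which the filter returns true
    function filterObject(object, filter) {
        var result = {};
        Object.keys(object).forEach(function (key) {
            if (filter(object[key], key)) {
                result[key] = object[key];
            }
        });
        return result;
    }

    // Copy one property from a source object into matching target entries,
    // dropping targets that have no corresponding source data
    function joinObjects(target, source, property) {
        Object.keys(target).forEach(function (key) {
            if (source[key] !== undefined && source[key][property] !== undefined) {
                target[key][property] = source[key][property];
            } else {
                delete target[key];
            }
        });
        return target;
    }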

    The actual decision making is performed by a rather small piece of code:

    if (1 === candidateAliases.length) {
        decisionAlias = candidateAliases[0];
        decisionReasons.push(reasons.singleAvailableCandidate);
        decisionTtl = decisionTtl || settings.defaultTtl;
    } else if (0 === candidateAliases.length) {
        decisionAlias = settings.lastResortProvider;
        decisionReasons.push(reasons.noneAvailableOrNoRtt);
        decisionTtl = decisionTtl || settings.defaultTtl;
    } else {
        candidates = filterObject(candidates, filterInvalidRtt);
        //console.log('candidates (rtt filtered): ' + JSON.stringify(candidates));
        candidateAliases = Object.keys(candidates);
        if (!candidateAliases.length) {
            decisionAlias = settings.lastResortProvider;
            decisionReasons.push(reasons.missingRttForAvailableCandidates);
            decisionTtl = decisionTtl || settings.defaultTtl;
        } else {
            decisionAlias = getLowest(candidates, 'http_rtt');
            decisionReasons.push(reasons.rtt);
            decisionTtl = decisionTtl || settings.defaultTtl;
        }
    }

    response.respond(decisionAlias, settings.providers[decisionAlias]);
    response.setReasonCode(decisionReasons.join(''));
    response.setTTL(decisionTtl);
};

    If we only have one provider left after our checks, we simply select that provider and output the CNAME. If we have zero providers left, the lastResortProvider is used. Otherwise, if everything is fine and we have more than one provider left, we do more checks.

    Once we are left with providers that are currently online and have no issues with their performance data, we pick the one with the lowest RUM HTTP round-trip time and push its CNAME out for the user’s browser to use.
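
    getLowest is the last helper that is not shown; a plausible sketch, assuming it simply scans the candidates for the smallest value of the given property:

    // Return the alias of the candidate with the lowest value
    // for the given property (here: 'http_rtt')
    function getLowest(candidates, property) {
        var lowestAlias = null,
            lowestValue = Infinity;
        Object.keys(candidates).forEach(function (alias) {
            var value = candidates[alias][property];
            if (value < lowestValue) {
                lowestValue = value;
                lowestAlias = alias;
            }
        });
        return lowestAlias;
    }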

    And that’s it. Most of the other things, like the fallback to country-level data, are done automatically in the backend, and we only receive the actual data we can use in our application.

    Conclusion

    I hope you found this interesting and learned more about what you should consider when doing load balancing, especially load balancing based on RUM data.

    Check out jsDelivr and feel free to use it in your projects. If you are interested in helping, we are also looking for Node.js developers and designers to help us out.

    We are also looking for company sponsors to help us grow even faster.

  10. Mozilla Introduces the First Browser Built For Developers: Firefox Developer Edition

    Developers are critical to the continued success of the Web. The content and apps they create compel us to come back to the Web every day, whether on a computer or mobile phone.

    In celebration of the 10th anniversary of Firefox, we’re excited to unveil Firefox Developer Edition, the first browser created specifically for developers.

    Ten years ago, we built Firefox for early adopters and developers to give them more choice and control. Firefox integrated WebAPIs and Add-ons to enable people to get the most out of the Web. Now we’re giving developers the whole browser as a hard-hat area, allowing us to bring front and center the features most relevant to them. Having a dedicated developer browser means we can tailor the browsing experience to what developers do every day.

    Because Firefox is part of an open-source, independent community and not part of a proprietary ecosystem, we’re able to offer features other browsers can’t by applying our tools everywhere the Web goes, regardless of platform or device.

    One of the biggest pain points for developers is having to use numerous siloed development environments in order to create engaging content or to target different app stores. For these reasons, developers often end up having to bounce between different platforms and browsers, which decreases productivity and causes frustration.

    Firefox Developer Edition solves this problem by creating a focal point to streamline your development workflow. It’s a stable developer browser which is not only a powerful authoring tool but also robust enough for everyday browsing. It also adds new features that simplify the process of building for the entire Web, whether targeting mobile or desktop across many different platforms.

    If you’re an experienced developer, you’ll already be familiar with the installed tools so you can focus on developing your content or app as soon as you open the browser. There’s no need to download additional plugins or applications to debug mobile devices. If you’re a new Web developer, the streamlined workflow and the fact that everything is already set up and ready to go makes it easier to get started building sophisticated applications.

    So what’s under the hood?

    The first thing you’ll notice is the distinctive dark design running through the browser. We applied the developer tools theme to the entire browser. It’s trim and sharp and focused on saving space for the content on your screen. It also fits in with the darker look common among creative app development tools.

    We’ve also integrated two powerful new features, Valence and WebIDE that improve workflow and help you debug other browsers and apps directly from within Firefox Developer Edition.

    Valence (previously called Firefox Tools Adapter) lets you develop and debug your app across multiple browsers and devices by connecting the Firefox dev tools to other major browser engines. Valence also extends the awesome tools we’ve built to debug Firefox OS and Firefox for Android to other major mobile browsers, including Chrome on Android and Safari on iOS. So far these tools include our Inspector, Debugger, Console, and Style Editor.

    WebIDE allows you to develop, deploy and debug Web apps directly in your browser, or on a Firefox OS device. It lets you create a new Firefox OS app (which is just a web app) from a template, or open up the code of an existing app. From there you can edit the app’s files. It’s one click to run the app in a simulator and one more to debug it with the developer tools.

    Firefox Developer Edition also includes all the tools experienced Web developers are familiar with, including:

    • Responsive Design Mode – see how your website or Web app will look on different screen sizes without changing the size of your browser window.
    • Page Inspector – examine the HTML and CSS of any Web page and easily modify the structure and layout of a page.
    • Web Console – see logged information associated with a Web page and use the Web Console to interact with a Web page using JavaScript.
    • JavaScript Debugger – step through JavaScript code and examine or modify its state to help track down bugs.
    • Network Monitor – see all the network requests your browser makes, how long each request takes and details of each request.
    • Style Editor – view and edit CSS styles associated with a Web page, create new ones and apply existing CSS stylesheets to any page.
    • Web Audio Editor – inspect and interact with the Web Audio API in real time to ensure that all audio nodes are connected in the way you expect.

    Give it a try and let us know what you think. We’re keen to hear your feedback.

    More Information: