Video Articles


  1. an update on open video codecs and quality

    Two days ago we posted a comparison by Greg Maxwell of low and medium resolution YouTube videos vs. Theora counterparts at the same bit rates. The result in that test was that Theora did much better at the low bit rate and more or less the same at the slightly higher bit rate. The conclusion was that Theora is perfectly appropriate for a site like YouTube.

    Now Maik Merten has done the same with videos at HD resolution, comparing videos encoded by YouTube and a video encoded with the new Theora encoder at a managed bitrate. The results? Go have a look at the images in the post. Tell us if you can honestly see a major difference. We can’t.

    [Comparison images: H.264 and Theora]

  2. Video, Mobile, and the Open Web

    [Also posted at brendaneich.com.]

    I wrote The Open Web and Its Adversaries just over five years ago, based on the first SXSW Browser Wars panel (we just had our fifth, it was great — thanks to all who came).

    Some history

    The little slideshow I presented is in part quaint. WPF/E and Adobe Apollo, remember those? (Either the code names, or the extant renamed products?) The Web has come a long way since 2007.

    But other parts of my slideshow are still relevant, in particular the part where Mozilla and Opera committed to an unencumbered <video> element for HTML5:

    • Working with Opera via WHATWG on <video>
      • Unencumbered Ogg Theora decoder in all browsers
      • Ogg Vorbis for <audio>
      • Other formats possible
      • DHTML player controls

    We did what we said we would. We fought against the odds. We carried the unencumbered HTML5 <video> torch even when it burned our hands.

    We were called naive (no) idealists (yes). We were told that we were rolling a large stone up a tall hill (and how!). We were told that we could never overcome the momentum behind H.264 (possibly true, but Mozilla was not about to give up and pay off the patent rentiers).

    Then in 2009 Google announced that it would acquire On2 (completed in 2010), and Opera and Mozilla had a White Knight.

    At Google I/O in May 2010, Adobe announced that it would include VP8 (but not all of WebM?) support in an upcoming Flash release.

    On January 11, 2011, Mike Jazayeri of Google blogged:

    … we are changing Chrome’s HTML5 <video> support to make it consistent with the codecs already supported by the open Chromium project. Specifically, we are supporting the WebM (VP8) and Theora video codecs, and will consider adding support for other high-quality open codecs in the future. Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.

    These changes will occur in the next couple months….

    A followup post three days later confirmed that Chrome would rely on Flash fallback to play H.264 video.

    Where we are today

    It is now March 2012 and the changes promised by Google and Adobe have not been made.

    What’s more, any such changes are irrelevant if made only on desktop Chrome — not on Google’s mobile browsers for Android — because authors typically do not encode twice (once in H.264, once in WebM), they instead write Flash fallback in an <object> tag nested inside the <video> tag. Here’s an example adapted from an Opera developer document:

    <video controls poster="video.jpg" width="854" height="480">
     <source src="video.mp4" type="video/mp4">
     <object type="application/x-shockwave-flash" data="player.swf"
             width="854" height="504">
      <param name="allowfullscreen" value="true">
      <param name="allowscriptaccess" value="always">
      <param name="flashvars" value="file=video.mp4">
      <!--[if IE]><param name="movie" value="player.swf"><![endif]-->
      <img src="video.jpg" width="854" height="480" alt="Video">
      <p>Your browser can't play HTML5 video.
     </object>
    </video>
    

    The Opera doc nicely carried the unencumbered video torch by including

     <source src="video.webm" type="video/webm">
    

    after the first <source> child in the <video> container (after the first, because of an iOS WebKit bug, the Opera doc said), but most authors do not encode twice and host two versions of their video (yes, you who do are to be commended; please don’t spam my blog with comments, you’re not typical — and YouTube is neither typical nor yet completely transcoded [1]).

    Of course the ultimate fallback content could be a link to a video to download and view in a helper app, but that’s not “HTML5 video” and it is user-hostile (profoundly so on mobile). Flash fallback does manage to blend in with HTML5, modulo the loss of expressiveness afforded by DHTML playback controls.

    Now, consider carefully where we are today.

    Firefox supports only unencumbered formats from Gecko’s <video> implementation. We rely on Flash fallback that authors invariably write, as shown above. Let that sink in: we, Mozilla, rely on Flash to implement H.264 for Firefox users.

    Adobe has announced that it will not develop Flash on mobile devices.

    In spite of the early 2011 Google blog post, desktop Chrome still supports H.264 from <video>. Even if it were to drop that support, desktop Chrome has a custom patched Flash embedding, so the fallback shown above will work well for almost all users.

    Mobile matters most

    Android stock browsers (all Android versions), and Chrome on Android 4, all support H.264 from <video>. Given the devices that Android has targeted over its existence, where H.264 hardware decoding is by far the most power-efficient way to decode, how could this be otherwise? Google has to compete with Apple on mobile.

    Steve Jobs may have dealt the death-blow to Flash on mobile, but he also championed and invested in H.264, and asserted that “[a]ll video codecs are covered by patents”. Apple sells a lot of H.264-supporting hardware. That hardware in general, and specifically in video playback quality, is the gold standard.

    Google is in my opinion not going to ship mobile browsers this year or next that fail to play H.264 content that Apple plays perfectly. Whatever happens in the very long run, Mozilla can’t wait for such an event. Don’t ask Google why they bought On2 but failed to push WebM to the exclusion of H.264 on Android. The question answers itself.

    So even if desktop Chrome drops H.264 support, Chrome users almost to a person won’t notice, thanks to Flash fallback. And Apple and Google, along with Microsoft and whomever else might try to gain mobile market share, will continue to ship H.264 support on all their mobile OSes and devices — hardware-implemented H.264, because that uses far less battery than alternative decoders.

    Here is a chart of H.264 video in HTML5 content on the Web from MeFeedia:

    MeFeedia.com, December 2011

    And here are some charts showing the rise of mobile over desktop from The Economist:

    The Economist, October 2011

    These charts show Bell’s Law of Computer Classes in action. Bell’s Law predicts that the new class of computing devices will replace older ones.

    In the face of this shift, Mozilla must advance its mission to serve users above all other agendas, and to keep the Web — including the “Mobile Web” — open, interoperable, and evolving.

    What Mozilla is doing

    We have successfully launched Boot to Gecko (B2G) and we’re preparing to release a new and improved Firefox for Android, to carry our mission to mobile users.

    What should we do about H.264?

    Andreas Gal proposes to use OS- and hardware-based H.264 decoding capabilities on Android and B2G. That thread has run to over 240 messages, and spawned some online media coverage.

    Some say we should hold out longer for someone (Google? Adobe?) to change something to advance WebM over H.264.

    MozillaMemes.tumblr.com/post/19415247873

    Remember, dropping H.264 from <video> only on desktop and not on mobile doesn’t matter, because of Flash fallback.

    Others say we should hold out indefinitely and by ourselves, rather than integrate OS decoders for encumbered video.

    I’ve heard people blame software patents. I hate software patents too, but software isn’t even the issue on mobile. Fairly dedicated DSP hardware takes in bits and puts out pixels. H.264 decoding lives completely in hardware now.

    Yes, some hardware also supports WebM decoding, or will soon. Too little, too late for HTML5 <video> as deployed and consumed this year or (for shipping devices) next.

    As I wrote in the newsgroup thread, Mozilla has never ignored users or market share. We do not care only about market share, but ignoring usability and market share can easily lead to extinction. Without users our mission is meaningless and our ability to affect the evolution of open standards goes to zero.

    Clearly we have principles that prohibit us from abusing users for any end (e.g., by putting ads in Firefox’s user interface to make money to sustain ourselves). But we have never rejected encumbered formats handled by plugins, and OS-dependent H.264 decoding is not different in kind from Flash-dependent H.264 decoding in my view.

    We will not require anyone to pay for Firefox. We will not burden our downstream source redistributors with royalty fees. We may have to continue to fall back on Flash on some desktop OSes. I’ll write more when I know more about desktop H.264, specifically on Windows XP.

    What I do know for certain is this: H.264 is absolutely required right now to compete on mobile. I do not believe that we can reject H.264 content in Firefox on Android or in B2G and survive the shift to mobile.

    Losing a battle is a bitter experience. I won’t sugar-coat this pill. But we must swallow it if we are to succeed in our mobile initiatives. Failure on mobile is too likely to consign Mozilla to decline and irrelevance. So I am fully in favor of Andreas’s proposal.

    Our mission continues

    Our mission, to promote openness, innovation, and opportunity on the Web, matters more than ever. As I said at SXSW in 2007, it obligates us to develop and promote unencumbered video. We lost one battle, but the war goes on. We will always push for open, unencumbered standards first and foremost.

    In particular we must fight to keep WebRTC unencumbered. Mozilla and Opera also lost the earlier skirmish to mandate an unencumbered default format for HTML5 <video>, but WebRTC is a new front in the long war for an open and unencumbered Web.

    We are researching downloadable JS decoders via Broadway.js, but fully utilizing parallel and dedicated hardware from JS for battery-friendly decoding is a ways off.

    Can we win the long war? I don’t know if we’ll see a final victory, but we must fight on. Patents expire (remember the LZW patent?). They can be invalidated. (Netscape paid to do this to certain obnoxious patents, based on prior art.) They can be worked around. And patent law can be reformed.

    Mozilla is here for the long haul. We will never give up, never surrender.

    /be

    [1] Some points about WebM on YouTube vs. H.264:

    • Google has at best transcoded only about half the videos into WebM. E.g., this YouTube search for “cat” gives ~1.8M results, while the same one for WebM videos gives 704K results.
    • WebM on YouTube is presented only for videos that lack ads, which is a shrinking number on YouTube. Anything monetizable (i.e., popular) has ads and therefore is served as H.264.
    • All this is moot when you consider mobile, since there is no Flash on mobile, and as of yet no WebM hardware, and Apple’s market-leading position.

  3. Firefox, YouTube and WebM

    Five important items of note today relating to Mozilla’s support for the VP8 codec:

    1. Google will be releasing VP8 on an open source, royalty-free basis. VP8 is a high-quality video codec that Google acquired when it purchased the company On2. The VP8 codec represents a vast improvement in quality-per-bit over Theora and is comparable in quality to H.264.

    2. The VP8 codec will be combined with the Vorbis audio codec and a subset of the Matroska container format to build a new standard for Open Video on the web called WebM. You can find out more about the project at its new site: http://www.webmproject.org/.

    3. We will include support for WebM in Firefox. You can get super-early WebM builds of Firefox 4 pre-alpha today. WebM will also be included in Google Chrome and Opera.

    4. Every video on YouTube will be transcoded into WebM. They have about 1.2 million videos available today and will be working through their back catalog over time. But they have committed to supporting everything.

    5. This is something that is supported by many partners, not just Google. Content providers like Brightcove have signed up to support WebM as part of a full HTML5 video solution. Hardware companies, encoding providers and other parts of the video stack are all part of the list of companies backing WebM. Even Adobe will be supporting WebM in Flash. Firefox, with its market share and principled leadership, and YouTube, with its video reach, are the most important partners in this solution, but we are only a small part of the larger ecosystem of video.

    We’re extremely excited to see Google joining us to support Open Video. They are making technology available on terms consistent with the Open Web and the W3C Royalty-Free licensing terms. And – most importantly – they are committing to support a full open video stack on the world’s largest video site. This changes the landscape for video and moves the baseline for what other sites have to do to maintain parity and keep up with upcoming advances in video technology, not to mention compatibility with the set of browsers that are growing their userbase and advancing technology on the web.

    At Mozilla, we’ve wanted video on the web to move as fast as the rest of the web. That has required a baseline of open technology to build on. Theora was a good start, but VP8 is better. Expect us to start pushing on video innovation with vigor. We’ll innovate like the web has, moving from the edges in, with dozens of small revolutions that add up to something larger than the sum of those parts. VP8 is one of those pieces, HTML5 is another. If you watch this weblog, you can start to see those other pieces starting to emerge as well. The web is creeping into more and more technologies, with Firefox leading the way. We intend to keep leading the web beyond HTML5 to the next place it needs to be.

    Today is a day of great change. Tomorrow will be another.

  4. 3D transforms in Firefox 3.5 – the isocube

    This demo was created by Zachary Johnson, a Minneapolis, MN based web developer who has also authored a jQuery plugin for animated “3D” rotation.

    I’d like to show an example of a visual effect that can be accomplished using the new -moz-transform CSS transformation property that is available in the Firefox 3.5 browser. I was very excited to see this feature added to Firefox because I knew it would allow me to produce a faux-3D isometric effect, also sometimes called 2.5D. I created a demo where HTML content is mapped onto the faces of a “3D” isometric cube:

    The -moz-transform property can take a list of CSS transform functions including rotate, scale, skew, and translate. Or, multiple transformations can be done simultaneously using a single 2D affine transformation matrix. Because all of the CSS transformation functions work in two dimensions, true 3D transformations with perspective projection cannot yet be accomplished. In this demo, I use the rotate and skew transformation functions in order to create the isometric effect.

    First, I define a container div for the cube, and then a square div for the top, left, and right faces of the cube.
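    The markup being described might look roughly like this (the class names match the CSS below; the face contents are placeholders):

    <div class="cube">
      <div class="face top">...</div>
      <div class="face left">...</div>
      <div class="face right">...</div>
    </div>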

    .cube {
        position: absolute;
    }
    
    .face {
        position: absolute;
        width: 200px;
        height: 200px;
    }
    

    Without getting too wrapped up in isometric projection math, we have to rotate and skew the cube faces in order to turn each square div into a parallelogram where the angles of the vertices are 60° or 120°. Here are what those transformations look like using the new -moz-transform CSS property:

    .top {
        -moz-transform: rotate(-45deg) skew(15deg, 15deg);
    }
    
    .left {
        -moz-transform: rotate(15deg) skew(15deg, 15deg);
    }
    
    .right {
        -moz-transform: rotate(-15deg) skew(-15deg, -15deg);
    }
    

    Now all that is left to do is to use absolute positioning to connect each transformed div in order to form the faces of the isometric cube. You can use matrix math to do this, or you could just move the faces around until they line up and it looks right.

    To add to the 3D effect, I’ve shaded the isometric cube by assigning different colors to the faces. I also gave the cube a shadow. You can see that the shadow is basically just a copy of the top face of the cube moved down to the bottom left, and then I gave the shadow div a black background color and set opacity: 0.5; in the CSS to make it filter over the page background.
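    A sketch of what that shadow rule might look like (the class name and the offsets are placeholders, not the demo’s actual values):

    .shadow {
        /* same isometric transform as the top face */
        -moz-transform: rotate(-45deg) skew(15deg, 15deg);
        background-color: #000;
        opacity: 0.5;
        /* placeholder offsets to nudge the copy toward the bottom left */
        left: -60px;
        top: 220px;
    }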

    Any HTML, such as the text and form buttons in this example, can be put inside the div for any face of the cube. It will be translated into the correct perspective. Christopher Blizzard gave me the idea to throw video onto one side of the cube using the new HTML5 video tag now supported in Firefox 3.5. As you can see, it works great.

    Finally, by making a copy of the cube markup and using two more of the CSS transformation functions, I was able to create a second cube. I made the cube 50% as big using a scale(0.5) transformation, and I moved it into place by using translate(600px, 400px).
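    A sketch of the second cube’s rule (the class name is an assumption, and the order of the transform functions affects where the cube ends up):

    .cube2 {
        position: absolute;
        -moz-transform: translate(600px, 400px) scale(0.5);
    }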

    I hope you found this demo to be interesting and that it opened up your mind to some exciting possibilities that Firefox 3.5 CSS transformations bring to the web.

    CSS transformations are also supported by Safari 3.1+ and Chrome using the -webkit-transform CSS property.

  5. html5 video fallbacks with markup

    In a previous post on this blog we talked about using JavaScript to create video elements on the fly. While that was a good fit for Mozilla’s support site, in this post we offer another set of methods that will likely find more use on the web. In fact, you can see it in use on Mozilla’s What’s New in Firefox 3.5 video and on this blog, and we’re starting to see pickup on the web as a whole.

    The examples here should work with Safari 4, Firefox 3.5 and IE using a Flash fallback. If you’re looking for a complete example that includes support for the video element, QuickTime, Windows Media, the iPhone, and finally Flash, you might try checking out Video for Everybody. It’s been tested with a huge pile of browsers in different configurations and is a good starting point if you’re looking for support for everything under the sun.

    You might also check out Michael Verdi’s video tags with fallbacks article where he uses a much simpler, but somewhat easier to understand method.

    A simple example

    The video tag looks something like this in a web page:
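    A minimal sketch, with a placeholder file name:

    <video src="video.ogv" controls></video>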

    
    

    What this does is insert a video element into your page. Firefox 3.5 will load the video, determine its size and re-size the element to the natural size of the video.

    Firefox 3.5 supports the Ogg Theora video format, a free and open standard for video. Opera and Google Chrome will also include support for Theora in later versions. Safari 4 can also play the same format when the Xiph QuickTime component is installed, but that component is not guaranteed to be present. Apple has licensed the MPEG-4 format for use in Safari 4, and it is supported by default.

    If you want to support both sets of browsers you need to provide more than one format. You also need to change the markup to tell the browser about the types of the files and the order in which they are preferred, and then let the browser choose which one it should use to display the video element. For our example the code would look something like this:
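    A sketch of that markup, with placeholder file names:

    <video controls>
     <source src="video.ogv" type="video/ogg">
     <source src="video.mp4" type="video/mp4">
    </video>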

    
    

    In this case the browser will first check to see if it supports the video/ogg video type. If it does, it will use that and load it. If it doesn’t it will move on to the next entry, the mp4 file and use it if it’s supported.

    While most modern browsers have specific plans to support the video element, Microsoft has not made their intentions clear. So in order to support IE users into the near future you should provide fallbacks until Microsoft adopts this part of the new HTML5 standard. For our example we’ll use a simple flash fallback.

    One nice thing about the video element is how easily it degrades in older browsers. If, in the above example, your browser doesn’t support the video tag and doesn’t support the source tag, it will simply fall back to whatever happens to be included inside them. This makes it very easy to provide fallbacks. Here’s what a Flash fallback that uses blip.tv would look like:
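    A generic sketch of such a fallback (the player URL, dimensions and flashvars are placeholders rather than actual blip.tv parameters):

    <video controls>
     <source src="video.ogv" type="video/ogg">
     <source src="video.mp4" type="video/mp4">
     <object type="application/x-shockwave-flash" data="player.swf"
             width="640" height="360">
      <param name="movie" value="player.swf">
      <param name="flashvars" value="file=video.mp4">
      <p>Download the <a href="video.ogv">video</a> instead.</p>
     </object>
    </video>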

    
    

    And that’s it. This example supports all browsers, degrades nicely and helps to move the web forward all in one blow. HTML5 isn’t finished, but web developers can certainly start taking advantage of the features that it provides in modern browsers today.

    The full example is embedded below.

  6. getUserMedia is ready to roll!

    We blogged about some of our WebRTC efforts back in April. Today we have an exciting update for you on that front: getUserMedia has landed on mozilla-central! This means you will be able to use the API on the latest Nightly versions of Firefox, and it will eventually make its way to a release build.

    getUserMedia is a DOM API that allows web pages to obtain video and audio input, for instance, from a webcam or microphone. We hope this will open the possibility of building a whole new class of web pages and applications. This DOM API is one component of the WebRTC project, which also includes APIs for peer-to-peer communication channels that will enable exchange of video streams, audio streams and arbitrary data.

    We’re still working on the PeerConnection API, but getUserMedia is a great first step in the progression towards full WebRTC support in Firefox! We’ve certainly come a long way since the first image from a webcam appeared on a web page via a DOM API. (Not to mention audio recording support in Jetpack before that.)

    We’ve implemented a prefixed version of the “Media Capture and Streams” standard being developed at the W3C. Not all portions of the specification have been implemented yet; most notably, we do not support the Constraints API (which allows the caller to request certain types of audio and video based on various parameters).

    We have also implemented a Mozilla specific extension to the API: the first argument to mozGetUserMedia is a dictionary that will also accept the property {picture: true} in addition to {video: true} or {audio: true}. The picture API is an experiment to see if there is interest in a dedicated mechanism to obtain a single picture from the user’s camera, without having to set up a video stream. This could be useful in a profile picture upload page, or a photo sharing application, for example.

    Without further ado, let’s start with a simple example! Make sure to create a pref named “media.navigator.enabled” and set it to true via about:config first. We’ve put the pref in place because we haven’t implemented a permissions model or any UI for prompting the user to authorize access to the camera or microphone. This release of the API is aimed at developers, and we’ll enable the pref by default after we have a permission model and UI that we’re happy with.
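    Here is a minimal sketch of the prefixed call described above (the element id, and wiring the stream up via mozSrcObject, reflect current Nightly behavior and may change):

    // assumes a <video id="preview" autoplay></video> element in the page
    var preview = document.getElementById("preview");

    navigator.mozGetUserMedia(
      { video: true },            // or { audio: true }
      function (stream) {         // success: attach the stream to the video element
        preview.mozSrcObject = stream;
        preview.play();
      },
      function (err) {            // failure: report the error
        console.log("getUserMedia failed: " + err);
      }
    );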

    There’s also a demo page where you can test the audio, video and picture capabilities of the API. Give it a whirl, and let us know what you think! We’re especially interested in feedback from the web developer community about the API and whether it will meet your use cases. You can leave comments on this post, or on the dev-media mailing list or newsgroup.

    We encourage you to get involved with the project – there’s a lot of information about our ongoing efforts on the project wiki page. Posting on the mailing list with your questions, comments and suggestions is great way to get started. We also hang out on the #media IRC channel, feel free to drop in for an informal chat.

    Happy hacking!

  7. theora 1.1 is released – what you should know

    Less than a year after the release of Theora 1.0, the wonderful people at Xiph have released Theora 1.1. The 1.1 release is a software-only release of the Theora encoder and decoder. It does not include any changes to the Theora format. Existing Theora videos should continue to play with the new decoder and the new Theora encoder generates bitstreams that will work in existing players that can play Theora content.

    The 1.1 release is largely an improvement to the Theora encoder. This post will attempt to give people a high-level overview of the changes and what they mean to web developers and people who are thinking of deploying Theora to support HTML5 video. Theora is an important technology to web developers – it’s the only competitive codec that currently complies with the W3C patent policy.

    Here’s a quick list of important things that have changed in this release. We’ll go into more detail on each of these items.

    • Video quality between Theora 1.0 and Theora 1.1 has been improved.
    • Rate control for live streaming now works well.
    • A two-pass mode has been added to the encoder that can create rate controlled videos with very predictable bandwidth requirements.
    • CPU usage during encoding is much more consistent.
    • Decoder performance has been improved.

    Video quality between Theora 1.0 and Theora 1.1 has been improved.

    One issue that people had with the Theora 1.0 encoder was that it produced video that appeared fuzzy. The 1.1 improvements are clear in these two images provided by Monty, one of the Xiph Developers. Open each of these images in new tabs and flip between them. You can really see the difference.

    This was also very visible at the edges of text. Here’s an example taken from one of our Firefox 3.5 promotional videos. The first is with the 1.0 encoder (9.0MB) and the second is with the 1.1 encoder (8.2MB). You will notice that not only are the edges more defined but there’s a lot less noise in the area around the edges of the text. Once again, if you open them in tabs and flip between them you can see the difference.

    Note that the original video is nearly 17MB. That was done largely to get the text crisp. With these changes we can likely use a much lower-bandwidth version of the video, probably as small as 9.9MB. That’s a pretty big difference.

    Note that we’re talking about an improvement of quality at the same video bitrate. This means that we’re either able to produce higher quality videos at the same file size or we’re able to reduce the file size and keep the same quality – either way it’s a big win.

    Rate control for live streaming now works well.

    Before describing this change, something important must be described. This is the difference between videos encoded with a variable bitrate (VBR) and a constant bitrate (CBR).

    In variable bitrate encoding the amount of data that’s required to represent the difference between two frames in a video is allowed to grow. This happens most often when shifting from a scene where there isn’t much movement to a scene where there’s a lot of motion. You could easily go from requiring 40Kb/sec to 400Kb/sec because the entire background moves.

    In constant bitrate encoding the amount of data that you’re allowed to use to represent a change from one frame to the next is pinned at some maximum value. If you’ve got a low maximum value and there’s a set of frames that requires a lot of bits to represent the changes from one to the next you will need to sacrifice something in order to stay inside of that maximum value. Very often it’s some amount of video quality or the encoder will start dropping frames in order to keep under the watermark.

    This leads to a pretty simple rule: If you want the highest quality video possible, you should be using variable rate encoding. This means that when you’re encoding a video you should be using quality settings (0-99, low/medium/high, 1-10) instead of picking bitrates (60Kb/sec, 200Kb/sec.) For most use cases on the web VBR-encoded videos actually work very well because users are allowed to buffer quite a bit of video out ahead of their current position so these bursts of data don’t affect the user’s experience.

    But there are some use cases where having a constant bitrate is very important. These include:

    1. Live, low delay streaming over HTTP with a lot of clients.
    2. Streaming large files where a large read-ahead buffer is not desired.
    3. Situations where large bursts of data result in large bursts in CPU to handle them.

    For live, low-delay streaming over HTTP it’s important to realize what happens when there’s a sudden burst of data to handle. HTTP runs over TCP. In TCP it takes a while for a connection to increase its bandwidth. (And by “a while” I mean “not that long” but it’s long enough to affect the low-latency connection that we want for this use case. This is why many low-latency applications don’t use TCP. But we’re talking about delivering video over HTTP.) If you’ve got a big burst of data and the TCP window takes a long time to open up, you start building up a big send buffer on the server. (And remember in this use case you’ve got a lot of clients connected!) That requires a lot of memory to hold the send buffers for each client. What happens then is that servers will start closing connections en masse because they need to save memory or because they think that the client has become somehow unreachable. This is made worse by the fact that even if the connection scales up and then scales back down, it re-settles at the low rate and the process has to be repeated. The user’s experience is that the video stream stops and restarts, or just stops working altogether when the server hangs up. The solution? Using a constant rate that doesn’t require the TCP window to open up suddenly and doesn’t require large send buffers for each client.

    For the use case where you’re streaming large files it might not be reasonable for the client to cache a lot of data. You also might be serving up a lot of data to a lot of clients and you might want to avoid the large send buffer problem as well, just for different reasons.

    And for the last use case, where you’re in a CPU-constrained environment, the bursting nature of variable bitrate videos means it often takes large bursts of CPU to handle them. While CPUs do burst up faster than TCP does, you might be talking to constrained processors (think mobile) or you might be serving content near HD resolution, which CPUs often struggle to decode.

    In any case there are a number of use cases for constant bitrate encoding. Back to the question of what’s improved in Theora 1.1.

    In Theora 1.0 the rate controlled encoding mode was very very broken. This resulted in two things:

    1. People trying to do live streaming ran into problems.
    2. People who used rate controlled settings to compare overall Theora quality to the quality of other encoders saw worse results than the format actually represented.

    The first issue is clear – it was broken, it should be fixed. And it has been. The new encoder does a pretty good job of maintaining bitrates, changes quality on the fly, drops frames and even includes a “soft-target” mode so that bitrates can fluctuate a little bit to maintain quality while occasionally breaking the bandwidth rules.

    The encoder also has a wonderful new piece of functionality that people will find very useful. It’s now possible to specify a maximum rate ceiling for video encoding while also specifying a minimum quality floor. What this means is that the encoder will try and maintain very crisp video frames within rate constraints. This means that it will aggressively drop frames instead of creating frame deltas that are fuzzy or low-quality. While this might sound like a poor trade-off it’s actually very useful. If you’re showing a live video of a presentation you usually want a crisp video of the slides and having a lower frame update rate is very acceptable.

    The second issue that was caused by the bad rate control in Theora 1.0 is an issue of marketing. People would often use the encoder with the fixed bitrate mode instead of the quality mode and dismiss the results as a reflection of the format instead of problems with the encoder. We hope that people find better results with the new encoder.

    A two-pass mode has been added to the encoder that can create rate controlled videos with very predictable bandwidth requirements.

    In addition to fixing the single pass rate controlled encoder in 1.1 a two-pass encoding option has been added. This means that if you are transcoding a file (as opposed to doing a live stream) you can create a very consistent bitrate in a file if you want. This is because the encoder can look ahead in the stream to allocate bits efficiently. Monty from Xiph made a graph that shows one example of the bitrate in a file with one pass and two pass.

    Above: graph of quantizer choice (which roughly corresponds to the encoding quality) when using default two-pass bitrate management as opposed to one-pass (with --soft-target) when encoding [the Matrix movie clip] at 300kbps. Both encodes resulted in the same bitrate. The quality of a one-pass encode varies considerably as the encoder has no way to plan ahead.

    CPU usage during encoding is much more consistent.

    People who were doing live streaming often saw huge spikes in CPU usage during high-motion events. This has been fixed and now CPU usage is much more consistent during single pass rate constrained encoding making it much easier to live stream video.

    Decoder performance has been improved.

    And last but not least, the decoder has been made faster in the 1.1 release. How much faster depends quite a bit on the clip, but people are reporting that the new decoder is anywhere from 1.5-2x faster than the 1.0 release of libtheora.

    Coming soon to a product near you.

    This release is a library release. It’s not a product in itself, but is instead something that other products include. So over the next days and weeks we’ll see other products pick up and start using this as part of their releases.

    Enjoy!

  8. using HTML5 video with fallbacks to other formats

    The Mozilla Support Project and support.mozilla.com (SUMO for short) is an open, volunteer-powered community that helps over 3 million Firefox users a week get support and help with their favorite browser. The Firefox support community maintains a knowledge base with articles in over 30 languages and works directly with users through our support forums or live chat service. They’ve put together the following demonstration of how to use open video and Flash-based video at the same time to provide embedded videos to users regardless of their browser. This demo article was written by Laura Thomson, Cheng Wang and Eric Cooper.

    Note: This article shows how to add video objects to a page using JavaScript. For most pages we suggest that you read the article that contains information on how to use the video tag to provide simple markup fallbacks. Markup-based fallbacks are much more elegant than JavaScript solutions and are generally recommended for use on the web.

    One of the challenges to open video adoption on the web is making sure that online video still performs well on browsers that don’t currently support open video.  Rather than asking users with these browsers to download the video file and use a separate viewer, the new <video> tag degrades gracefully to allow web developers to provide a good experience for everyone.  As Firefox 3.5 upgrades the web, users in this transitional period can be shown open video or video using an existing plugin in an entirely seamless way depending on what their browser supports.

    At SUMO, we use this system to provide screencasts of problem-solving steps such as in the article on how to make Firefox the default browser.

    If you visit the page using Firefox 3.5, or another browser with open video support, and click on the “Watch a video of these instructions” link, you get a screencast using an Ogg-formatted file.  If you visit with a browser that does not support <video> you get the exact same user experience using an open source .flv player and the same video encoded in the .flv format, or in some cases using the SWF Flash format.  These alternate viewing methods use the virtually ubiquitous Adobe Flash plugin which is one of the most common ways to show video on the web.

    The code works as follows.

    In the page which contains the screencasts, we include some JavaScript.   Excerpts from this code follow, but you can see or check out the complete listing from Mozilla SVN.

    The code begins by setting up an object to represent the player:

    Screencasts.Player = {
        width: 640,
        height: 480,
        thumbnails: [],
        priority: { 'ogg': 1, 'swf': 2, 'flv': 3 },
        handlers: { 'swf': 'Swf', 'flv': 'Flash', 'ogg': 'Ogg' },
        properNouns: { 'swf': 'Flash', 'flv': 'Flash', 'ogg': 'Ogg Theora' },
        canUseVideo: false,
        isThumb: false,
        thumbWidth: 160,
        thumbHeight: 120
    };
    

    We allocate a priority to each of the possible video formats.  You’ll notice we also have the 'canUseVideo' attribute, which defaults to false.

    Later on in the code, we test the user’s browser to see if it is video-capable:

    // Create a video element purely for feature detection
    var detect = document.createElement('video');
    if (typeof detect.canPlayType === 'function' &&
        detect.canPlayType('video/ogg;codecs="theora,vorbis"') == 'probably') {
        // Native Ogg Theora playback is available: prefer it over the .flv fallback
        Screencasts.Player.canUseVideo = true;
        Screencasts.Player.priority.ogg = Screencasts.Player.priority.flv + 2;
    }
    

    If we can create a video element and it indicates that it can play the Ogg Theora format we set canUseVideo to true and increase the priority of the Ogg file to be greater than the priority of the .flv file. (Note that you could also detect if the browser could play .mp4 files to support Safari out of the box.)
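    A sketch of what that extra check might look like, reusing the detect element from above (the codecs string is a common H.264/AAC profile, and the mp4 priority entry is hypothetical since the player object doesn’t define one):

    if (typeof detect.canPlayType === 'function' &&
        detect.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== '') {
        // hypothetical: rank an .mp4 file above the .flv fallback
        Screencasts.Player.priority.mp4 = Screencasts.Player.priority.flv + 1;
    }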

    Finally, we use the priority to select which file is actually played, by iterating through the list of files to find the one that has the highest priority:

    var best;  // will hold the highest-priority format found so far
    for (var x = 0; x < file.length; x++) {
        if (!best) {
            best = file[x];
        } else if (this.priority[best] < this.priority[file[x]]) {
            best = file[x];
        }
    }
    

    With these parts in place, the browser displays only the highest priority video and in a format that it can handle.

    If you want to learn more about the Mozilla Support Project or get involved in helping Firefox users, check out their guide to getting started.

    Resources

    * Note: To view this demo using Safari 4 on Mac OS X, you will need to add Ogg support to QuickTime using the Xiph QuickTime plugin available from http://www.xiph.org/quicktime/download.html

  9. Firefox Development Highlights – H.264 & MP3 support on Windows, scoped stylesheets + more

    Time for the first look this year into the latest developments with Firefox. This is part of our Bleeding Edge and Firefox Development Highlights series, and most examples only work in Firefox Nightly (and could be subject to change).

    H.264 & MP3 support on Windows

    Firefox for Android and Firefox OS already support H.264 and MP3. We are also working on bringing these formats to Firefox Desktop. On Windows 7 and above, you can already test this by turning on the preference media.windows-media-foundation.enabled in about:config. Decoding is done on the OS side (no decoder is included in the Firefox source code, unlike WebM or Ogg Theora). For Linux and Mac, work is in progress.
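    Once the preference is flipped, one quick way to check whether the platform decoders are being picked up is to query canPlayType from script (the codecs string below is a common H.264/AAC profile):

    var v = document.createElement("video");
    // "" means no support; "maybe" or "probably" means the OS decoder is available
    console.log(v.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"'));
    console.log(new Audio().canPlayType("audio/mpeg")); // MP3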

    The new Downloads panel has been enabled

    We have now enabled the new Downloads panel:

    Scoped style attribute

    It’s now possible to define scoped style elements. Usually, when we write a stylesheet, we use <style>...</style>, and the CSS code is applied to the whole document. If the <style> tag is nested inside a node (let’s say a <div>), and the <style> tag includes the scoped attribute (<style scoped>), then the CSS code will apply only to a subtree of the document, starting with the parent node of the <style> element. The root of the subtree can also be referred to via the :scope pseudo-class.
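    A minimal sketch of what that looks like:

    <div>
      <style scoped>
        /* these rules apply only to this <div> and its descendants */
        p { color: crimson; }
        :scope { border: 1px solid crimson; }
      </style>
      <p>This paragraph is styled by the scoped rules.</p>
    </div>
    <p>This paragraph, outside the subtree, is not affected.</p>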

    Demo

    Scoped style demo on JS Bin.

    Our friends over at HTML5Rocks have also written about it in A New Experimental Feature: scoped stylesheets.

    @supports and CSS.supports

    In Firefox 17, we shipped the @supports CSS at-rule. This lets you define specific CSS code only if some features are supported. For example:

    @supports not (display: flex) {
      /* If flex box model is not supported, we use a different layout */
      #main {
          width: 90%;
      }
    }
    

    In Firefox 20, it’s now possible to do the same thing, but within JavaScript:

    if (CSS.supports("display", "flex")) {
      // do something relying on flexbox
    }

  10. autobuffering video in Firefox

    John Gruber recently wrote up an article titled Why the HTML5 ‘Video’ Element Is Effectively Unusable, Even in the Browsers Which Support It.

    He’s mostly upset that browsers don’t respect the autobuffer attribute. Or, really, that browsers autobuffer by default. Safari and Chrome do apparently autobuffer by default, but he incorrectly says that Firefox does as well. Just to be clear: Firefox does not autobuffer by default, nor does it autoplay by default. I’m not sure how his testing led him to believe that it does, but I wrote up some examples to show that it does not. Here are three examples:

    The best way to test this is to load your favorite browser and clear its cache. Then load your favorite network monitor. Then load the page without the autobuffer attribute set. You can see how much bandwidth the browser is actually using via your external tool.

    Of the browsers I tried, only Firefox gives useful feedback on how much is actually buffered as part of its native control set so you have to go to external tools to be able to tell what’s going on with anything else. If you load the page with two videos – one with autobuffer set and one without - you can mouse over them in Firefox and see how one has buffered and one hasn’t.
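    The test markup boils down to something like this (the file name is a placeholder):

    <!-- no autobuffer: Firefox fetches just enough to show the first frame and find the duration -->
    <video src="big-video.ogv" controls></video>

    <!-- autobuffer set: the browser may download the whole video ahead of playback -->
    <video src="big-video.ogv" controls autobuffer></video>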

    The video on that test page is pretty big – around 115 MB – so you can really tell when it’s buffering even on a fast network.

    Here’s what my testing shows:

    This is consistent with what I know about Safari. I talked with one of the Apple WebKit engineers a few months ago and he said that Safari for desktops buffered by default and said that Safari for mobile does not.

    There might be some confusion because Firefox does make a couple of network requests when you include a video tag. This is done for two reasons:

    • To get the first frame of the video and its size. It will only download a small part of the beginning of the video to get the first frame and then stop downloading.
    • If your server supports byte range requests, it seeks to the end of the video to try and determine the duration of the video. If your server doesn’t support byte range requests, we don’t get the duration. (Ogg was originally designed as a format for streaming, not static files, and as such doesn’t include duration information in the header of the file, although there is work underway to add this functionality.)

    If you want to get a sense of how Firefox uses bandwidth, try this bandwidth test I wrote a few months ago. It uses a Firefox-only feature to read progress information on a video as it’s downloaded and give you a rough sense of the amount of bandwidth that particular video element is using. It shows a little traffic when you load the video and then shows how it starts using bandwidth once you press the play button.

    All of this is to say that we understand John’s point of view, and we made the decision not to autobuffer by default for the very reasons he points out.