HTML5 Articles


  1. Weekly HTML5 Apps Developer Resources, August 12th 2012

    Weekly Resources for HTML5 Apps Developers



    Bonus Link

    If you find a link that you think should be included, please feel free to forward it to JStagner at

  2. It's Opus, it rocks and now it's an audio codec standard!

    In a great victory for open standards, the Internet Engineering Task Force (IETF) has just standardized Opus as RFC 6716.

    Opus is the first state-of-the-art, free audio codec to be standardized. We think this will help us achieve wider adoption than prior royalty-free codecs like Speex and Vorbis. This spells the beginning of the end for proprietary formats, and we are now working on doing the same thing for video.

    There was both skepticism and outright opposition to this work when it was first proposed in the IETF over 3 years ago. However, the results have shown that we can create a better codec through collaboration, rather than competition between patented technologies. Open standards benefit both open source organizations and proprietary companies, and we have been successful working together to create one. Opus is the result of a collaboration between many organizations, including the IETF, Mozilla, Microsoft (through Skype), Xiph.Org, Octasic, Broadcom, and Google.

    A highly flexible codec

    Unlike previous audio codecs, which have typically focused on a narrow set of applications (either voice or music, in a narrow range of bitrates, for either real-time or storage applications), Opus is highly flexible. It can adaptively switch among:

    • Bitrates from 6 kb/s to 512 kb/s
    • Voice and music
    • Mono and stereo
    • Narrowband (8 kHz) to Fullband (48 kHz)
    • Frame sizes from 2.5 ms to 60 ms

    Most importantly, it can adapt seamlessly within these operating points. Doing all of this with proprietary codecs would require at least six different codecs. Opus replaces all of them, with better quality.
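    As a rough illustration of the kind of operating-point switching described above, here is a purely hypothetical sketch (this is not the Opus API; the function name and thresholds are invented for the example, since Opus makes these decisions internally in the encoder):

    ```javascript
    // Illustrative only: mirrors the kind of adaptive switching the text
    // describes. All thresholds here are made up for the example.
    function chooseOperatingPoint(availableKbps, isVoice) {
      const bitrate = Math.max(6, Math.min(512, availableKbps)); // Opus range: 6-512 kb/s
      const bandwidth = bitrate < 12 ? 'narrowband'   // ~8 kHz sampling
                      : bitrate < 32 ? 'wideband'
                      : 'fullband';                   // up to 48 kHz
      const frameMs = isVoice ? 20 : 60; // shorter frames suit interactive voice
      return { bitrate, bandwidth, frameMs };
    }
    ```

    A real application never has to write code like this for Opus itself; the point is that one codec covers the whole range, where previously you would have picked a different codec per cell of this table.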
    Illustration of the quality of different codecs
    The specification is available in RFC 6716, which includes the reference implementation. Up-to-date software releases are also available.

    Some audio standards define a normative encoder, which cannot be improved after it is standardized. Others allow for flexibility in the encoder, but release an intentionally hobbled reference implementation to force you to license their proprietary encoders. For Opus, we chose to allow flexibility for future encoders, but we also made the best one we knew how and released that as the reference implementation, so everyone could use it. We will continue to improve it, and keep releasing those improvements as open source.

    Use cases

    Opus is primarily designed for use in interactive applications on the Internet, including voice over IP (VoIP), teleconferencing, in-game chatting, and even live, distributed music performances. The IETF recently decided with “strong consensus” to adopt Opus as a mandatory-to-implement (MTI) codec for WebRTC, an upcoming standard for real-time communication on the web. Despite the focus on low latency, Opus also excels at streaming and storage applications, beating existing high-delay codecs like Vorbis and HE-AAC. It’s great for internet radio, adaptive streaming, game sound effects, and much more.

    Although Opus is just out, it is already supported in many applications, such as Firefox, GStreamer, FFmpeg, foobar2000, K-Lite Codec Pack, and LAV Filters, with upcoming support in VLC, Rockbox and Mumble.

    For more information, visit the Opus website.

  3. Weekly HTML5 Apps Developer Resources, August 29th 2012

    Weekly Resources for HTML5 Apps Developers



    Bonus Link

    If you find a link that you think should be included, please feel free to forward it to JStagner at

  4. Opus Support for WebRTC

    Opus audio codec logo

    As we announced during the beta cycle, Firefox now supports the new Opus audio format. We expect Opus to be published as RFC 6716 any day now, and we’re starting to see Opus support pop up in more and more places. Momentum is really building.

    What does this mean for the web?

    Keeping the Internet an open platform is part of Mozilla’s mission. When the technology the Web needs doesn’t exist, we will invest the resources to create it, and release it royalty-free, just as we ask of others. Opus is one of these technologies.

    Mozilla employs two of the key authors and developers, and has invested significant legal resources into avoiding known patent thickets. It uses processes and methods that have been long known in the field and which are considered patent-free. As a result, Opus is available on a royalty-free basis and can be deployed by anyone, including other open-source projects. Everyone knows this is an incredibly challenging legal environment to operate in, but we think we’ve succeeded.

    Why is Opus important?

    The Opus support in the <audio> tag we’re shipping today is great. We think it’s as good or better than all the other codecs people use there, particularly in the voice modes, which people have been asking for for a long time. But our goals extend far beyond building a great codec for the <audio> tag.

    Mozilla is heavily involved in the new WebRTC standards to bring real-time communication to the Web. This is the real reason we made Opus, and why its low-delay features are so important. At the recent IETF meeting in Vancouver we achieved “strong consensus” to make Opus Mandatory To Implement (MTI) in WebRTC. Interoperability is even more important here than in the <audio> tag. If two browsers ship without any codecs in common, a website still has the option of encoding their content twice to be compatible with both. But that option isn’t available when the browsers are trying to talk to each other directly. So our success here is a big step in bringing interoperable real-time communication to the Web, using native Web technologies, without plug-ins.

    Illustration of the quality of different codecs

    Opus’s ability to scale to both very low bitrates and very high quality, all with very low delay, was instrumental in achieving this consensus. It would take at least six other codecs to satisfy all the use-cases Opus does. So try out Opus today for your podcasts, music broadcasts, games, and more. And look out for Opus in WebRTC coming soon.

  5. Mozilla and Games: Pushing the Limits of What’s Possible

    At Mozilla, we believe in the power and potential of the Web and want to see it thrive for everyone, everywhere.

    What We’ve Done

    We’re committed to building the infrastructure needed to keep the Web the most robust platform on the planet. Although its roots go back some time, Mozilla’s focus on games is a relatively new initiative. We are focused on making Firefox the best game development platform possible.

    Check out BananaBread.

    The latest Firefox release includes all of the JavaScript and WebGL updates needed to produce this demo.

    BananaBread was developed by Mozilla to show our progress in action. We ported a complete C++ game engine to JavaScript using Emscripten. The original open-source engine is called Cube 2. It was designed to support first-person shooters. Few believed porting a full, highly responsive game to JavaScript was an achievable goal. (We had our own doubts.) To our amazement, we found that we were able to build a demo that surpassed our highest expectations.

    The project required very few code modifications to the original game, which demonstrates that porting games to the Web does not have to be difficult.

    Learn more about Emscripten.

    New technologies for HTML5 games

    Here are a few technologies that have landed this year to advance our support for HTML5 games:

    • Game focused performance improvements to JavaScript, many inspired by games and demos that we saw on the Web or that developers sent to us for testing
    • Wide range of WebGL performance improvements
    • High precision timing
    • Compressed texture support on desktop
    • Smoother JavaScript execution on large code bases
    • Hardware acceleration of 2D canvas on desktop
    • FullScreen API
    • PointerLock API (special thanks to David Humphrey and students at Seneca College)
    • OrientationLock

    Firefox for desktops has come a long way in a short time. But there is still more to come. We are working on features that will improve performance and make development easier. We are also investigating options for porting to JavaScript from languages such as C# and Java.

    What’s Next

    Our focus for the first half of 2012 was Firefox for Windows, Mac and Linux, and while we continue to make improvements there, our focus for the second half of the year will include Firefox for Android and Firefox OS. There are hard challenges ahead but we are excited to deliver the maximum potential HTML5 has to offer, both in features and performance.

    One of the main goals of the Mozilla Community working on games is to drive game development not only on Firefox but across all browsers. Any browser that has implemented the necessary modern Web standards used by the BananaBread demo can run it. These efforts help us stay in touch with how HTML5 is coming together and see opportunities where we can make developers’ lives easier. Hearing directly from the HTML5 game developer community is a key part of how we learn what needs to be done.

    I hope you’ll come and join us in raising the bar on what’s possible!

    You can join the conversation on our IRC server at, channel #games.

    Or sign up for the mailing list at

  6. Weekly HTML5 Apps Developer Resources, August 15th 2012

    Weekly Resources for HTML5 Apps Developers



    Bonus Link

    If you find a link that you think should be included, please feel free to forward it to JStagner at

  7. Video and slides from JavaScript APIs – The Web is the Platform presentation at the .toster conference

    Back in May, I was lucky to go to Moscow, Russia, to speak at the .toster conference about JavaScript APIs and the web as a platform. Now I have the video together with the slides to share from that presentation.

    I cover a lot of various JavaScript APIs, possibilities on the web and also new WebAPIs!

    The video for JavaScript APIs – The Web is the Platform is available on YouTube. It is also embedded below:

    (If you’ve opted in to HTML5 video on YouTube you will get that, otherwise it will fall back to Flash.)

    The slides that I use in the presentation are available on SlideShare, and also embedded here:

  8. Weekly HTML5 Apps Developer Resources, August 1st 2012

    Weekly Resources for HTML5 Apps Developers



    Bonus Link

    If you find a link that you think should be included, please feel free to forward it to JStagner at

  9. Report from San Francisco Gigabit Hack Days with US Ignite

    This past weekend, the Internet Archive played host to a crew of futurist hackers for the San Francisco Gigabit Hack Days.

    The two-day event, organized by Mozilla and the City of San Francisco, was a space for hackers and civic innovators to do some experiments around the potential of community fiber and gigabit networks.


    The event kicked off Saturday morning with words from Ben Moskowitz and Will Barkis of Mozilla, followed by the Archive’s founder Brewster Kahle. Brewster talked a bit about how the Archive was imagined and built as a “temple to the Internet.”

    San Francisco’s Chief Innovation Officer, Jay Nath, talked about the growing practice of hackdays in the city and the untapped potential of the City’s community broadband network. The SF community broadband network is a 1Gbps network that provides Internet access for a wide range of community sites within San Francisco—including public housing sites, public libraries, city buildings and more.

    These partners are eager to engage with developers to use the network as a testbed for high bandwidth applications, so we quickly broke off to brainstorm possible hacks.

    Among the proposals: an app to aggregate and analyze campaign ads from battleground states; apps to distribute crisis response; local community archives; 3D teleconferencing for education and medicine; “macro-visualization” of video, and fast repositories of 3D content. Read on for more details.


    Ralf Muehlen, network engineer, community broadband instigator, and all-around handyman at the Archive prepared for the event in a few very cool ways—unspooling many meters of gigabit ethernet cable for hackers, and provisioning a special “10Gbps desktop.”

    The 10Gbps desktop in action


    The 10Gbps desktop was a server rack with an industrial strength network card, connected to raw fiber and running Ubuntu. While not a very sensible test machine, the 10Gbps desktop was an awesome way to stress the limits of networks, hardware, and software clients. Video hackers Kate Hudson, Michael Dale, and Jan Gerber created a video wall experiment to simultaneously load 100 videos from the Internet Archive, weighing in at about 5Mbps each. On this machine, unsurprisingly, the main bottleneck was the graphics card. Casual testing revealed that Firefox does a pretty good job of caching and loading tons of media where other browsers choked or crashed, though its codec support is not as broad, making these kinds of experiments difficult.


    Here are some of the results of the event:

    Macro-visualization of video

    Kate Hudson, Michael Dale, and Jan Gerber created an app that queues the most popular stories on Google News and generates a video wall.

    The wall is created by searching the Archive’s video collection by captions and finding matches. Imagined as a way of analyzing different types of coverage around the same issue, the app has a nice bonus feature: real-time webcam chat, in browser, using WebRTC. If two users are hovered over the same video, they’re dropped into an instant video chat room, ChatRoulette-style.
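    A minimal sketch of the hover-matching idea described above, with hypothetical names (the actual hack’s code isn’t published yet, so this is only a guess at the pairing logic, not the real implementation):

    ```javascript
    // Track, per video, one user who is hovering and waiting for a partner.
    const hovering = new Map(); // videoId -> userId

    function onHover(userId, videoId) {
      const waiting = hovering.get(videoId);
      if (waiting && waiting !== userId) {
        hovering.delete(videoId);
        // In the real app, this is where the WebRTC offer/answer
        // exchange would start between the two paired users.
        return { room: `${videoId}:${waiting}:${userId}`, peers: [waiting, userId] };
      }
      hovering.set(videoId, userId); // no partner yet; wait on this video
      return null;
    }
    ```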

    The demo uses some special capabilities of the Archive and can’t be tested at home just yet, but we’re looking to get the code online as soon as possible.

    Scalable 3D content delivery

    As Jeff Terrace writes in his post-event blog: “3D models can be quite big. Games usually ship a big DVD full of content or make you download several gigabytes worth of content before you can start playing… [by] contrast, putting 3D applications on the web demands low-latency start times.”

    Jeff and Henrik Bennetsen, who work on federated 3D repositories, wanted to showcase the types of applications that can be built with online 3D repositories and fast connections. So they hacked an “import” button into ThreeFab, the three.js scene editor.

    Using Jeff’s hack, users can load models asynchronously in the background directly from repositories like Open3DHub (CORS headers are required for security reasons). The models are seamlessly loaded from across the web and added into the current scene.

    This made for an awesome and thought-provoking line of inquiry—what kind of apps and economies can we imagine using 3D modeling, manipulation, and printing across fast networks? Can 3D applications be as distributed as typical web applications tend to be?

    Bonus: as a result of the weekend, the Internet Archive is working on enabling CORS headers for its own content, so hopefully we will be able to load 3D/WebGL content directly from the Archive soon.

    3D videoconferencing using point cloud streams

    XB PointStream loads data from Radiohead's House of Cards music video

    Andor Salga, author of an excellent JS library called XB PointStream, wanted to see if fast networks could enable 3D videoconferencing.

    Point clouds are 3D objects represented through volumetric points rather than mesh polygons. They’re interesting to graphics professionals for a number of reasons—for one, they can have very, very high resolution and appear lifelike.

    Interestingly, sensor arrays like the low-cost Microsoft Kinect can be used to generate point cloud meshes on the cheap, by taking stereoscopic “depth images” along with infrared. (It may sound far out, but it’s the basis for a whole new wave of motion-controlled videogames.)
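    To make the idea concrete, here is a back-of-the-envelope sketch of turning a depth image into a point cloud using a simple pinhole-camera model (the focal length and image dimensions are illustrative, not real Kinect calibration values):

    ```javascript
    // depth: flat array of per-pixel depth readings, row-major.
    // Each valid pixel (u, v) with depth z is unprojected to a 3D point.
    function depthToPoints(depth, width, height, focal) {
      const cx = width / 2, cy = height / 2; // assume the optical center is the image center
      const points = [];
      for (let v = 0; v < height; v++) {
        for (let u = 0; u < width; u++) {
          const z = depth[v * width + u];
          if (z <= 0) continue; // no reading at this pixel
          points.push([(u - cx) * z / focal, (v - cy) * z / focal, z]);
        }
      }
      return points;
    }
    ```

    The resulting array of xyz triples is exactly the kind of data a library like XB PointStream renders with WebGL.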

    Using Kinect sensors and WebGL, it should be possible to create a 3D videoconferencing system on the cheap. Users on either end would be able to pan around a 3D model of a person they’re connected to, almost like a hologram.

    This type of 3D video conferencing would be able to communicate depth information in a way that traditional video calls can’t. Additionally, these kinds of meetings could be recorded, then played back with camera interaction, allowing users to get different perspectives of the meetings. Just imagine the applications in the health and education sectors.

    For their hack, Andor and a few others wanted to prototype a virtual classroom that would—for instance—enable scientists at San Francisco’s Exploratorium to teach kids at community sites connected to San Francisco’s community broadband network.

    After looking at a few different ways of connecting the Kinect to the browser, it appeared that startup Zigfu offers the best available option: a browser plugin that provides an API to Kinect hardware. San Francisco native Amir Hirsch, founder of Zigfu, caught word of the event and came by to help. The plan was to use WebSockets to sync the data between two users of this theoretical system. The team didn’t get a chance to complete the prototype by the end of the weekend, but will keep hacking.

    Point clouds are typically very large data sets. Especially if they are dynamic, a huge amount of data must transfer from one system to another very quickly. Without very fast networks, this kind of application would not be possible.
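    One way such a system might frame point-cloud data for a WebSocket (which accepts binary ArrayBuffer payloads) is sketched below. The binary layout is our own invention for illustration, not the team’s actual protocol: a 4-byte frame number followed by 32-bit floats.

    ```javascript
    // Pack a frame number and a flat array of point coordinates into a buffer.
    function encodeFrame(frameNo, coords) {
      const buf = new ArrayBuffer(4 + coords.length * 4);
      const view = new DataView(buf);
      view.setUint32(0, frameNo);
      coords.forEach((c, i) => view.setFloat32(4 + i * 4, c));
      return buf; // in a browser this would be passed to ws.send(buf)
    }

    // Reverse the layout on the receiving side.
    function decodeFrame(buf) {
      const view = new DataView(buf);
      const coords = [];
      for (let off = 4; off < buf.byteLength; off += 4) {
        coords.push(view.getFloat32(off));
      }
      return { frameNo: view.getUint32(0), coords };
    }
    ```

    Even with a compact binary layout like this, a dynamic point cloud at interactive frame rates quickly adds up to many megabits per second, which is exactly why the gigabit testbed matters.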

    Other hacks

    Overall, this was a fantastic event for enabling the Internet Archive to become a neighborhood cloud for San Francisco, experimenting on the sharp edges of the internet, and building community in SF. A real highlight was to see 16-year-old Kevin Gil from BAVC’s Open Source track lead a group of teenaged hackers in creating an all-new campaign ad uploader interface for the Archive—quite impressive for any weekend hack, let alone one by a team of young webmakers.

    From left: Ralf Muehlen, Tim Pozar, and Mike McCarthy, the minds behind San Francisco's community broadband network

    Thanks to everyone for spending time with us on a beautiful SF weekend, and see you next time!

    Get Involved

    If you’re interested in the future of web applications, fast networks, and the Internet generally, check out Mozilla Ignite.

    Now through the end of summer, you can submit ideas for “apps from the future” that use edge web technologies and fast networks. The best ideas will earn awards from a $15k prize pool.

    Starting in September, you can apply to the Mozilla Ignite apps challenge, with $485,000 in prizes for apps that demonstrate the potential of fast networks.

    Check out the site, follow us @mozillaignite, and let us know where you see the web going in the next 10 years!

  10. Taking About:Home Snippets to the Next Level

    If you are a Firefox user and you start the browser this morning or type “about:home” in the URL bar, we have a surprise for you. Instead of the Firefox logo you’ll see an animation celebrating the global spirit of community. This is just one of many planned enhancements to pages and Mozilla communication channels.

    To give you a peek under the hood and some insight into how this was made, we asked Bruce MacFarlane from Particle a few questions about the animation. Particle has been working with top Silicon Valley companies, exploring ways to use HTML5 and CSS, since its beginning. They have a unique blend of talent that hovers between creative development and engineering, which makes projects like this possible.

    1) Hi there, when you get asked to do an animation like that, what is the creative process? Do you make sketches, animate in a tool like Flash or Maya or do you start right away with the code?

    I find it helpful to sit with a step-by-step list of what is going to happen and really try to visualize the entire animation, getting a feel for how it will most likely function under the hood. If there’s anything I’m not quite sure how to handle I’ll break that out into a separate proof of concept sandbox file where I can just focus on that one thing and make sure I get it right. After that it’s easy to put all the parts together.

    2) What is the main technology behind the animation? Canvas? CSS? And why did you choose it?

    The animation was done entirely in CSS to take advantage of its ease of use and fluidity.
    Although something similar could have been accomplished with Canvas, going that route would have required a lot more setup code and we wouldn’t have been able to take advantage of CSS3’s handy timing functions.

    Also, being able to fire animation sequences with class names the same way you would work with basic styling is a huge plus.

    In the case of this animation we were able to define just one keyframe that played with scaling and opacity, then applied it to all five flame layers, with the variations coming simply from varying the animation durations and delays.
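    A sketch of what that technique might look like in CSS (the class names and timing values here are illustrative, not the snippet’s actual code): one keyframe playing with scale and opacity, reused across five layers with different durations and delays.

    ```css
    /* One shared keyframe for every flame layer. */
    @keyframes flicker {
      0%, 100% { transform: scale(1);    opacity: 1; }
      50%      { transform: scale(1.08); opacity: 0.65; }
    }

    /* Variation comes only from per-layer durations and delays. */
    .flame              { animation: flicker 2s ease-in-out infinite; }
    .flame:nth-child(2) { animation-duration: 2.4s; animation-delay: 0.15s; }
    .flame:nth-child(3) { animation-duration: 2.8s; animation-delay: 0.3s; }
    .flame:nth-child(4) { animation-duration: 3.1s; animation-delay: 0.45s; }
    .flame:nth-child(5) { animation-duration: 3.5s; animation-delay: 0.6s; }
    ```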

    3) I assume that the start page of Firefox is a special environment. What were the limitations you had to work around in terms of performance and connectivity?

    Since the code that runs the animation is being loaded onto a pre-existing page, we do end up having to do a little bit of logic when it loads to adjust some of the initial content and move our new content into position before running the animation.

    A Snippet is a single file of code that gets loaded onto a pre-existing page, therefore certain considerations have to be made. To avoid additional requests for assets, any images have to be base64 encoded and entered directly in the code. This is much easier than it sounds. There are plenty of websites out there where you can upload an image and have it give you back its base64 encoding. I tend to use a couple of simple PHP commands that accomplish the same thing:

    $image = file_get_contents('image.png');
    $imagedata = base64_encode($image);
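    For comparison, the same encoding can be done in Node.js (the file path and helper name here are illustrative):

    ```javascript
    const fs = require('fs');

    // Read a file and wrap its base64 encoding in a data URI,
    // ready to drop into an img src or CSS background-image.
    function toDataUri(path, mime) {
      return `data:${mime};base64,${fs.readFileSync(path).toString('base64')}`;
    }
    ```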

    Also if an animation has to interact with content that’s already on the screen (and may be positioned relatively, which can cause problems with animations) a certain amount of work has to be done to structure the previous content to work with the new content (wrapping the moving parts in a container that can still move in relation to window size so you can absolutely position elements within that you need to, etc.).

    4) When optimising you sometimes find yourself having to do the same tasks over and over again and build tools to do them. Did you use already existing tools or are you building some?

    Developing on Macs we end up using the ‘Activity Monitor’ utility app quite a lot which can really help with monitoring memory and CPU usage. About:home snippets are a unique animal in that only Firefox users ever see them. For sites and web applications that are being viewed on a diversity of devices and browsers we have developed some internal tools that help simplify compatibility issues and greatly aid in the quick construction of projects that we’ll be releasing soon.

    5) With your experience in this, do you think there is a market for animation tools that have output like the code you did? Do you know of any yet? There were a few around a year ago (Adobe Edge, Animatable…), but I haven’t seen any being pushed heavily.

    There are definitely some animations where the keyframes can get so complex that these sorts of tools could help out a lot and save time. We don’t have a favorite yet, but we’re keeping our eyes open.

    6) What about legacy support? Is it worth testing and tweaking on older browsers or does it make more sense to have a static image fallback?

    It is always important to support a certain number of legacy browsers, and static image fallbacks can sometimes be your only reasonable choice.

    7) If developers wanted to start with their own similar animations, what would you say is the biggest time-sink to avoid? What are good shortcuts?

    It’s sometimes easy to get bogged down in animations that have a lot of moving parts, and go on for a long time. In those situations I find it helps to break things down into scenes that can be fired off with parent container class names.
    Sometimes in those scenes I’ll apply the same animation duration to each element and work within the keyframe percentages to handle timing. This way it’s easy to add adjustments to individual elements in the timeline later.

    Thanks Bruce.

    What do you think? Expect to see more experiments like this showing off new technologies. Anything you’d like to see, or information you’d want us to publish? Just comment below.