The state of Web Components

Web Components have been on developers’ radars for quite some time now. They were first introduced by Alex Russell at Fronteers Conference 2011. The concept shook the community up and became the topic of many future talks and discussions.

In 2013 a Web Components-based framework called Polymer was released by Google to kick the tires of these new APIs, get community feedback and add some sugar and opinion.

By now, 4 years on, Web Components should be everywhere, but in reality Chrome is the only browser with ‘some version’ of Web Components. Even with polyfills it’s clear Web Components won’t be fully embraced by the community until the majority of browsers are on-board.

Why has this taken so long?

To cut a long story short, vendors couldn’t agree.

Web Components were a Google effort, and little negotiation took place with other browsers before shipping. Like most negotiations in life, parties that don’t feel involved lack enthusiasm and tend not to agree.

Web Components were an ambitious proposal. Initial APIs were high-level and complex to implement (albeit for good reasons), which only added to contention and disagreement between vendors.

Google pushed forward: they sought feedback and gained community buy-in; but in hindsight, until other vendors shipped, real-world usage was blocked.

Polyfills meant that, in theory, Web Components could work in browsers that hadn’t yet implemented them, but these have never been accepted as ‘suitable for production’.

Aside from all this, Microsoft haven’t been in a position to add many new DOM APIs due to the Edge work (nearing completion), and Apple have been focusing on alternative features for Safari.

Custom Elements

Of all the Web Components technologies, Custom Elements have been the least contentious. There is general agreement on the value of being able to define how a piece of UI looks and behaves and being able to distribute that piece cross-browser and cross-framework.


The term ‘upgrade’ refers to when an element transforms from a plain old HTMLElement into a shiny custom element with its defined life-cycle and prototype. Today, when elements are upgraded, their createdCallback is called.

var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function() { ... };
document.registerElement('x-foo', { prototype: proto });

There are five proposals so far from multiple vendors; two stand out as holding the most promise.


An evolved version of the createdCallback pattern that works well with ES6 classes. The createdCallback concept lives on, but sub-classing is more conventional.

class MyEl extends HTMLElement {
  createdCallback() { ... }
}

document.registerElement("my-el", MyEl);

Like in today’s implementation, the custom element begins life as HTMLUnknownElement then some time later the prototype is swapped (or ‘swizzled’) with the registered prototype and the createdCallback is called.

The downside of this approach is that it’s different from how the platform itself behaves. Elements are ‘unknown’ at first, then transform into their final form at some point in the future, which can lead to developer confusion.
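The swap itself can be modelled without any DOM at all, using plain objects (the stub names below are ours, for illustration only):

```javascript
// A plain-object model of the 'upgrade': the element starts life with a
// generic prototype, then is 'swizzled' onto the registered one and its
// createdCallback runs. HTMLElementStub stands in for HTMLUnknownElement.
function HTMLElementStub() {}
var el = new HTMLElementStub();

var registeredProto = {
  createdCallback: function() { this.upgraded = true; }
};

// ...some time later, registration happens:
Object.setPrototypeOf(el, registeredProto); // the prototype 'swizzle'
el.createdCallback();                       // called once upgraded

console.log(el.upgraded); // true
```

Until that swap happens, any code inspecting the element sees only the generic prototype, which is exactly the source of the confusion described above.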

Synchronous constructor

The constructor registered by the developer is invoked by the parser at the point the custom element is created and inserted into the tree.

class MyEl extends HTMLElement {
  constructor() { ... }
}

document.registerElement("my-el", MyEl);

Although this seems sensible, it means that any custom elements in the initial downloaded document will fail to upgrade if the scripts that contain their registerElement definition are loaded asynchronously. This is not helpful heading into a world of asynchronous ES6 modules.
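The failure mode can be sketched with a toy registry (plain JavaScript, no DOM; all names are illustrative):

```javascript
// Toy model: a synchronous-constructor parser has no later 'upgrade' pass,
// so elements parsed before their definition loads are never upgraded.
var registry = {};

function parse(tag) {
  var Ctor = registry[tag];
  return Ctor ? new Ctor() : { unknown: true }; // unknown forever
}

var early = parse('my-el');  // parsed before the async script registers it

registry['my-el'] = function MyEl() { this.ready = true; }; // script arrives

var late = parse('my-el');   // only elements parsed from now on work

console.log(early.unknown, late.ready); // true true
```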

Additionally, synchronous constructors come with platform issues related to .cloneNode().

A direction is expected to be decided by vendors at a face-to-face meeting in July 2015.


The is attribute

The is attribute gives developers the ability to layer the behaviour of a custom element on top of a standard built-in element.

<input type="text" is="my-text-input">

Arguments for

  1. Allows extending the built-in features of an element that aren’t exposed as primitives (e.g. accessibility characteristics, <form> controls, <template>).
  2. They give means to ‘progressively enhance’ an element, so that it remains functional without JavaScript.

Arguments against

  1. Syntax is confusing.
  2. It side-steps the underlying problem that we’re missing many key accessibility primitives in the platform.
  3. It side-steps the underlying problem that we don’t have a way to properly extend built-in elements.
  4. Use-cases are limited; as soon as developers introduce Shadow DOM, they lose all built-in accessibility features.


It is generally agreed that ‘is’ is a ‘wart’ on the Custom Elements spec. Google has already implemented is and sees it as a stop-gap until lower-level primitives are exposed. Right now Mozilla and Apple would rather ship a Custom Elements V1 sooner and address this problem properly in a V2 without polluting the platform with ‘warts’.

HTML as Custom Elements is a project by Domenic Denicola that attempts to rebuild built-in HTML elements with custom elements in an attempt to uncover DOM primitives the platform is missing.

Shadow DOM

Shadow DOM yielded the most contention by far between vendors. So much so that features had to be split into a ‘V1’ and ‘V2’ agenda to help reach agreement quicker.


Distribution

Distribution is the phase whereby children of a shadow host get visually ‘projected’ into slots inside the host’s Shadow DOM. This is the feature that enables your component to make use of content the user nests inside it.

Current API

The current API is fully declarative. Within the Shadow DOM you can use special <content> elements to define where you want the host’s children to be visually inserted.

<content select="header"></content>
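Roughly, this is the matching the engine has to compute; a minimal plain-JavaScript model (no DOM, and the object shapes are ours):

```javascript
// Model of <content select>: each insertion point claims the host's
// not-yet-distributed children whose tag matches its selector.
function distributeBySelect(children, insertionPoints) {
  var taken = new Set();
  return insertionPoints.map(function(point) {
    var matched = children.filter(function(child) {
      return !taken.has(child) && child.tag === point.select;
    });
    matched.forEach(function(child) { taken.add(child); });
    return { select: point.select, nodes: matched };
  });
}

var result = distributeBySelect(
  [{ tag: 'header' }, { tag: 'p' }],
  [{ select: 'header' }]
);

console.log(result[0].nodes.length); // 1 -- the <header> is projected
```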

Both Apple and Microsoft pushed back on this approach due to concerns around complexity and performance.

A new Imperative API

Even at the face-to-face meeting, agreement couldn’t be reached on a declarative API, so all vendors agreed to pursue an imperative solution.

All four vendors (Microsoft, Google, Apple and Mozilla) were tasked with specifying this new API before a July 2015 deadline. So far there have been three suggestions. The simplest of the three looks something like:

var shadow = host.createShadowRoot({
  distribute: function(nodes) {
    var slot = shadow.querySelector('content');
    for (var i = 0; i < nodes.length; i++) {
      slot.add(nodes[i]);
    }
  }
});

shadow.innerHTML = '<content></content>';

// Call initially ...
shadow.distribute();

// then hook up to MutationObserver

The main obstacle is timing. If the children of the host node change and we only redistribute when the MutationObserver callback fires, asking for a layout property before that callback runs will return an incorrect result.

someElement.offsetTop; //=> old value

// distribute on mutation observer callback (async)

someElement.offsetTop; //=> new value

Calling offsetTop will perform a synchronous layout before distribution!

This might not seem like the end of the world, but scripts and browser internals often depend on the value of offsetTop being correct to perform many different operations, such as scrolling elements into view.
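A tiny model of the race (plain JavaScript; the queue stands in for the MutationObserver callback timing):

```javascript
// Reads made before the queued 'distribution' callback flushes see stale
// layout state.
var pending = [];
function queueDistribution(cb) { pending.push(cb); }
function flush() { pending.splice(0).forEach(function(cb) { cb(); }); }

var layout = { offsetTop: 10 };               // pre-distribution value
queueDistribution(function() { layout.offsetTop = 20; });

var stale = layout.offsetTop;                 // 10 -- callback hasn't run
flush();                                      // distribution happens
var fresh = layout.offsetTop;                 // 20

console.log(stale, fresh); // 10 20
```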

If these problems can’t be solved we may see a retreat back to discussions over a declarative API. This will either be in the form of the current <content select> style, or the newly proposed ‘named slots’ API (from Apple).

A new Declarative API – ‘Named Slots’

The ‘named slots’ proposal is a simpler variation of the current ‘content select’ API, whereby the component user must explicitly label their content with the slot they wish it to be distributed to.

Shadow Root of <x-page>:

<slot name="header"></slot>
<slot name="footer"></slot>
<div>some shadow content</div>

Usage of <x-page>:

<x-page>
  <header slot="header">header</header>
  <footer slot="footer">footer</footer>
  <h1>my page title</h1>
  <p>my page content</p>
</x-page>

Composed/rendered tree (what the user sees):

<x-page>
  <header slot="header">header</header>
  <h1>my page title</h1>
  <p>my page content</p>
  <footer slot="footer">footer</footer>
  <div>some shadow content</div>
</x-page>

The browser looks at the direct children of the shadow host (myXPage.children) and checks whether any of them have a slot attribute that matches the name of a <slot> element in the host’s shadowRoot.

When a match is found, the node is visually ‘distributed’ in place of the corresponding <slot> element. Any children left undistributed at the end of this matching process are distributed to a default (unnamed) <slot> element (if one exists).
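That matching step is simple enough to sketch in plain JavaScript (no DOM; the object shapes are illustrative):

```javascript
// Named-slot distribution: children with a matching `slot` attribute go to
// that slot; everything left over lands in the default (unnamed) slot.
function distributeToSlots(children, slotNames) {
  var assigned = {};
  slotNames.forEach(function(name) { assigned[name] = []; });

  var leftover = [];
  children.forEach(function(child) {
    if (child.slot && assigned.hasOwnProperty(child.slot)) {
      assigned[child.slot].push(child);
    } else {
      leftover.push(child);
    }
  });

  if (assigned.hasOwnProperty('')) assigned[''] = leftover; // default slot
  return assigned;
}

var slotted = distributeToSlots(
  [{ slot: 'header', text: 'header' }, { text: 'my page title' }],
  ['header', 'footer', '']
);

console.log(slotted['header'].length, slotted[''].length); // 1 1
```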

Arguments for

  1. Distribution is more explicit, easier to understand, less ‘magic’.
  2. Distribution is simpler for the engine to compute.

Arguments against

  1. Doesn’t explain how built-in elements, like <select>, work.
  2. Decorating content with slot attributes is more work for the user.
  3. Less expressive.

‘closed’ vs. ‘open’

When a shadowRoot is ‘closed’ it cannot be accessed via myHost.shadowRoot. This gives a component author some assurance that users won’t poke into implementation details, similar to how you can use closures to keep things private.

Apple felt strongly that this was an important feature that they would block on. They argued that implementation details should never be exposed to the outside world and that ‘closed’ mode would be a required feature when ‘isolated’ custom elements became a thing.

Google on the other hand felt that ‘closed’ shadow roots would prevent some accessibility and component tooling use-cases. They argued that it’s impossible to accidentally stumble into a shadowRoot and that if people want to they likely have a good reason. JS/DOM is open, let’s keep it that way.

At the April meeting it became clear that to move forward, ‘mode’ needed to be a feature, but vendors were struggling to reach agreement on whether this should default to ‘open’ or ‘closed’. As a result, all agreed that for V1 ‘mode’ would be a required parameter, and thus wouldn’t need a specified default.

element.createShadowRoot({ mode: 'open' });
element.createShadowRoot({ mode: 'closed' });
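The practical difference can be modelled in a few lines (attachShadowStub is a hypothetical stand-in for createShadowRoot, not a real API):

```javascript
// In 'closed' mode the host's shadowRoot accessor reveals nothing; only the
// code that created the root holds a reference to it.
function attachShadowStub(host, options) {
  var root = { mode: options.mode, host: host };
  host.shadowRoot = options.mode === 'open' ? root : null;
  return root;
}

var openHost = {};
var closedHost = {};
var openRoot = attachShadowStub(openHost, { mode: 'open' });
var closedRoot = attachShadowStub(closedHost, { mode: 'closed' });

console.log(openHost.shadowRoot === openRoot); // true
console.log(closedHost.shadowRoot);            // null -- details hidden
```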

Shadow piercing combinators

A ‘piercing combinator’ is a special CSS combinator that can target elements inside a shadow root from the outside world. An example is /deep/, later renamed to >>>:

.foo >>> div { color: red }

When Web Components were first specified it was thought that these were required, but after looking at how they were being used it became clear they brought only problems, making it too easy to break the style boundaries that make Web Components so appealing.


Style calculation can be incredibly fast inside a tightly scoped Shadow DOM if the engine doesn’t have to take into consideration any outside selectors or state. The very presence of piercing combinators forbids these kinds of optimisations.


Dropping shadow piercing combinators doesn’t mean that users will never be able to customize the appearance of a component from the outside.

CSS custom-properties (variables)

In Firefox OS we’re using CSS Custom Properties to expose specific style properties that can be defined (or overridden) from the outside.

External (user):

x-foo { --x-foo-border-radius: 10px; }

Internal (author):

.internal-part { border-radius: var(--x-foo-border-radius, 0); }

Custom pseudo-elements

We have also seen interest expressed from several vendors in reintroducing the ability to define custom pseudo-selectors that would expose given internal parts to be styled (similar to how we style parts of <input type="range"> today).

x-foo::my-internal-part { ... }

This will likely be considered for a Shadow DOM V2 specification.

Mixins – @extend

There is a proposed specification to bring SASS’s @extend behaviour to CSS. This would be a useful tool for component authors, allowing users to provide a ‘bag’ of properties to apply to a specific internal part.

External (user):

.x-foo-part {
  background-color: red;
  border-radius: 4px;
}

Internal (author):

.internal-part {
  @extend .x-foo-part;
}

Multiple shadow roots

‘Why would I want more than one shadow root on the same element?’, I hear you ask. The answer is: inheritance.

Let’s imagine I’m writing an <x-dialog> component. Within this component I write all the markup, styling, and interactions to give me an opening and closing dialog window.

<x-dialog>
  <h1>My title</h1>
  <p>Some details</p>
</x-dialog>

The shadow root pulls any user provided content into div.inner via the <content> insertion point.

<div class="outer">
  <div class="inner">
    <content></content>
  </div>
</div>

I also want to create <x-dialog-alert> that looks and behaves just like <x-dialog> but with a more restricted API, a bit like alert('foo').

var proto = Object.create(XDialog.prototype);

proto.createdCallback = function() {
  XDialog.prototype.createdCallback.call(this); // creates the 'older' shadow root
  this.createShadowRoot();                      // creates the 'younger' root
  this.shadowRoot.innerHTML = templateString;
};

document.registerElement('x-dialog-alert', { prototype: proto });

The new component will have its own shadow root, but it’s designed to work on top of the parent class’s shadow root. The <shadow> represents the ‘older’ shadow root and allows us to project content inside it.


Once you get your head round multiple shadow roots, they become a powerful concept. The downside is they bring a lot of complexity and introduce a lot of edge cases.

Inheritance without multiple shadows

Inheritance is still possible without multiple shadow roots, but it involves manually mutating the super class’s shadow root.

var proto = Object.create(XDialog.prototype);

proto.createdCallback = function() {
  XDialog.prototype.createdCallback.call(this);
  var inner = this.shadowRoot.querySelector('.inner');

  var h1 = document.createElement('h1');
  h1.textContent = 'Alert';
  inner.insertBefore(h1, inner.children[0]);

  var button = document.createElement('button');
  button.textContent = 'OK';
  inner.appendChild(button);
};

document.registerElement('x-dialog-alert', { prototype: proto });

The downsides of this approach are:

  1. Not as elegant.
  2. Your sub-component is dependent on the implementation details of the super-component.
  3. This wouldn’t be possible if the super component’s shadow root was ‘closed’, as this.shadowRoot would be null.

HTML Imports

HTML Imports provide a way to import all assets defined in one .html document, into the scope of another.

<link rel="import" href="/path/to/imports/stuff.html">

As previously stated, Mozilla is not currently intending to implement HTML Imports. This is partly because we’d like to see how ES6 modules pan out before shipping another way of importing external assets, and partly because we don’t feel they enable much that isn’t already possible.

We’ve been working with Web Components in Firefox OS for over a year and have found that using existing module syntax (AMD or CommonJS) to resolve a dependency tree, register elements, and load them with a normal <script> tag is enough to get stuff done.

HTML Imports do lend themselves well to a simpler/more declarative workflow, such as the older <element> and Polymer’s current registration syntax.

With this simplicity has come criticism from the community that Imports don’t offer enough control to be taken seriously as a dependency management solution.

Before the decision was made a few months ago, Mozilla had a working implementation behind a flag, but struggled with an incomplete specification.

What will happen to them?

Apple’s Isolated Custom Elements proposal makes use of an HTML Imports-style approach to provide custom elements with their own document scope; perhaps there’s a future there.

At Mozilla we want to explore how importing custom element definitions can align with upcoming ES6 module APIs. We’d be prepared to implement if/when they appear to enable developers to do stuff they can’t already do.

To conclude

Web Components are a prime example of how difficult it is to get large features into the browser today. Every API added lives indefinitely and remains as an obstacle to the next.

It’s comparable to picking apart a huge knotted ball of string, adding a bit more, then tangling it back up again. This knot, our platform, grows ever larger and more complex.

Web Components have been in planning for over three years, but we’re optimistic the end is near. All major vendors are on board, enthusiastic, and investing significant time to help resolve the remaining issues.

Let’s get ready to componentize the web!


About Wilson Page

Front-end developer at Mozilla.



  1. Brian Di Palma

    “partly because we don’t feel they enable much that isn’t already possible.”

    What about all the other specs? What do they enable that isn’t already possible? What apps can be built that we can’t currently build? Maybe the reason Web Components haven’t taken off with Polymer is simply that web developers don’t need them?

    June 9th, 2015 at 13:54

    1. cody lindley

      Nods. Well said, and great questions. Web components, as envisioned by Polymer, offer better off-the-shelf HTML for those who want to be handed something like an element and just have it work. Or a tab UI or carousel UI. Other than that, I’m unclear how these rather complicated offerings (besides custom elements) help those of us who have been building components for web apps for years. The benefit appears to be minimal, except for two groups of people. One, people who want to build very simple static web pages and use HTML only to do it. Two, those who build these custom elements so the first group does not have to learn CSS and JS in depth. Those two groups could gain something major from WC and maybe Polymer. Which, honestly, is great. No problem. Hope that works out. But those of us who have been building componentized/modular web applications divided into regions of components/widgets for years see the benefits as trivial (in terms of adopting something like Polymer). This technology doesn’t really seem to address the pain points associated with building web applications (acknowledging it helps with simple websites). What is really needed is a little sugar around custom elements, and then to see if we can actually squeeze some value out of declarative programming. For me personally, all this custom HTML stuff just leads to imperative code, as all declarative code leads to imperative to get the job done in an organized fashion. Now, I’ve spewed a lot of opinions. And all that really matters is what the community does. And as of yet, it’s done very little. Which speaks a lot louder than my thoughts.

      June 9th, 2015 at 15:43

      1. Paul van Dam

        Abstraction and encapsulation are good practices in any environment, why wouldn’t they be in HTML/CSS? Just because they simplify development, doesn’t mean that they are only useful for people with limited skill sets, as you are trying to portray.

        June 10th, 2015 at 02:28

        1. Wilson Page

          I was implying that an ecosystem of high-quality components would lower the barrier to web-application development for beginners. I’m assuming the authoring of components would largely be a task of intermediate/advanced community. We don’t have a good story for onboarding newcomers to the world of web-applications, and IMO the web is suffering as a result.

          The jump from static website to single-page app is a big one. Competent programmers may continue to author in their Reacts, Embers, Angulars, etc; but let’s give something to the folks in-between, whilst at the same time improve the platform we know and love :)

          June 10th, 2015 at 02:46

          1. Wes Johnston

            I think this is a little disingenuous, to act like anyone using these isn’t “competent”. The web has been moving towards this for decades now (long before Alex Russell was involved). It’s a good way to write things. It’s the way every single other platform that exists operates. Frameworks have popped up over the years to polyfill bits of it, but this is the later-end part of the extensible web manifesto: build low-level stuff so that devs can experiment (some of this falls in there since the Shadow DOM is really hard to fake), standardize the high-level stuff once it’s matured.

            June 10th, 2015 at 13:12

      2. Wilson Page

        The good thing about the APIs that assemble the Web Components spec is that they are separated. If you don’t want to dive in head first with a Web Components based framework, you can still benefit from something like Shadow DOM.

        A global CSS namespace is not fun for developers or browsers to reason with. Shadow DOM can be used in select parts of your application to bring sanity to your code-base, and at the same time improve engine performance.

        Dive in completely, pick ‘n’ mix the APIs for you, or ignore them altogether; the choice is yours :)

        June 10th, 2015 at 03:01

    2. Wilson Page

      IMO the most exciting things about Web Components are:

      1. Interoperability: We can all use shared components independent of our UI framework of choice.
      2. Lowering the barrier to entry to creating high-quality web applications: Via declarative composition of off-the-shelf components.
      3. First stage to exposing browser internals: Frameworks can only go so far. Web Components are a step in the right direction to exposing APIs that empower developers to create elements as good as built-in elements.

      Regarding ‘enabling what’s not already possible’:

      1. Shadow DOM: Gives us style isolation and markup encapsulation that frameworks can’t (or struggle to) implement.
      2. Custom Elements: (as above) Give a universally interoperable way to define a ‘component’.

      Reasons Web Components haven’t been largely adopted by community:

      Only Chrome supports a version of the APIs, and polyfills aren’t sensible for production. Compelling examples of Web Components in production by big players will give the go ahead for others to follow suit.

      June 10th, 2015 at 02:37

      1. Kevin Lozandier

        Can you elaborate that “Polyfills aren’t sensible for production”? People’s mileage may vary obviously with a gesture as accommodating & complex as polyfills tend to be.

        Accordingly, it seems rather a blanket statement overall to say “polyfills aren’t sensible for production”.

        Are you talking about specific polyfills from the past, existing polyfills today, or you really meant to say polyfills in general aren’t sensible for production?

        June 10th, 2015 at 11:16

        1. Kevin Lozandier

          *Can you elaborate what you mean when you say that “polyfills aren’t sensible for production”?

          June 10th, 2015 at 11:17

        2. Wilson Page

          In the back of my mind I was thinking about the Shadow DOM polyfill not being suitable for performant mobile web-applications.

          Having said that, the new ‘Shady DOM’ polyfill might [1] change things. I know the Polymer team are working hard on this problem.


          June 10th, 2015 at 11:27

  2. Joe

    Maybe the ball of string needs unknotting even more. Take React.js and JSX: this provides all the goals of componentization and works well with non-browser DOM targets. Personally, I would prefer to see XML literals added to ECMAScript than Web Components added to HTML.

    June 9th, 2015 at 14:39

    1. Angus

      Now that’s an idea. We could call it something like “ECMAScript for XML”, or perhaps e4x for short.

      July 6th, 2015 at 09:45

      1. Michael J. Ryan

        It’s funny you mention e4x, which I really loved for quite some time… used it a lot in AS3 for Flash communication to a VB.Net backend which also had XML literal support… worked very well. These days I use JSON everywhere.

        As it stood though, nobody outside of Mozilla had any interest in implementing e4x, the v8 team flatly dismissed the idea (despite a lot of stars on the issue). Of course, JSX as computed templates is a lot faster than using e4x, or for that matter ES6 template processing for rendering. I actually really like React, and while I think that something like Polymer is probably the future, I will probably be sticking to a react/flux workflow for some time on new projects.

        July 8th, 2015 at 20:46

  3. Ron Waldon

    Great post. Thanks for this.

    June 9th, 2015 at 17:22

  4. Kevin Lozandier

    Hi, Joe:

    There’s nothing stopping you from making web components with React.js or JSX—whether that’s with something like Maple.js or an abstraction such as Polymer.

    Both being abstractions, they expect you to be open to knowing the state of Web Components today, which this article in fact helps you understand.

    Abstractions of Maple.js (which uses Polymer’s polyfills & has its own abstractions inspired by Polymer) & Polymer are an abstraction to help devs make Web Components TODAY, not unlike what jQuery did to help devs ignore the DOM inconsistencies that plagued the web many years ago.

    Some of these contentious bits are very arguable (i.e. the “general” thoughts of is being a “wart” seems rather dishonest), & in the meantime such Web Components abstractions allow you to use Web Components.

    Web Components provide long overdue capabilities to provide a more empathetic experience for consumers of components.

    In 2015, a growing amount of developers or any consumers shouldn’t care whether you used React/JSX or Angular to make a component; they shouldn’t have to memorize adding 2-3 different scripts to use it, or remember to copy-and-paste markup devoid of any semantic meaning (divitis) to have a component such as a carousel on their page.

    With HTML Imports, the Shadow DOM, Custom Elements & Templates, we largely don’t have such problems.

    June 9th, 2015 at 17:45

  5. Sean Hogan

    Custom Browsing Contexts would help with addressing the Custom Elements issues, so naturally YAGNI, NIH, TLN.

    Custom Element registration could only occur during Custom Browsing Context installation, so the browser knows that if the browsing context is ready then the Custom Element definitions are ready.

    Custom Browsing Contexts could (must) intercept the DOM of fetched pages before they enter the view. This allows native HTML elements in a page to be replaced with custom elements **before** entering the view, which is more flexible than @is, e.g.
    replace all “
    with “

    Custom Browsing Contexts wouldn’t (couldn’t) support document.write() so that’s one more issue dealt with.

    June 9th, 2015 at 20:26

  6. SteveLee

    Great post and summary – thanks

    If Polymer was simply a reference implementation for cross-browser discussion that would be good – however it turned into an “everything is a component” framework. That’s another thing entirely and may explain some of the lack of adoption by browsers.

    I’m *really* concerned about ensuring progressive enhancement with web components. We need to provide fail-safe behaviour for older browsers, Opera Mini, or when errors occur. That’s hard to provide, especially with polyfills that rely on JavaScript. Even when all browsers implement them in a way that supports PE it will depend on authors to write good components, rather like good accessibility of user-provided content in a CMS.

    June 10th, 2015 at 03:38

    1. Kevin Lozandier

      When it comes to progressive enhancement, as with all things that depend on JS, creating a “grade 1” version of your components without the bells & whistles of JS is the recommended route to take if you want to support a no-JS situation.

      Scott Jehl has a remarkable book to aid developers to better understand this called *Responsible Responsive Design* .

      When it comes to things like a11y, Web Components can certainly be lazily implemented by developers who didn’t care to begin with. But that’s not surprising because progressive enhancement is often ignored because of developer inconvenience.

      Fortunately Polymer—a popular Web Component abstraction that recently became the first to make a serious effort towards being a Web Component lib you can actually use in production when it hit 1.0 last month—& the team behind it have made a serious effort of providing abstractions to make progressive enhancement easy & promoting extensive a11y best practices.

      The following deck is a great resource from Google I/O regarding the latter:

      Furthermore, this recent article provides the common way to PE Web Components today in addition to simple things you can do for the sake of performance with Polymer:

      Polymer 1.0 even shipped w/ elements, type extensions & behavior mixins entirely dedicated to a11y.

      I’d say that these concerns are legitimate but are greatly alleviated increasingly with the efforts production-ready Web Component abstractions like Polymer are exerting towards solving this problem—particularly to give developers less excuses to ignore Web Components because of PE concerns.

      The deprecation of `is` complicates things a bit until standard bodies actually provide a superior alternative we can use today though.

      Overall, I hope this information helped you worry less about ensuring progressive enhancement w/ Web Components.

      June 15th, 2015 at 09:30

      1. SteveLee

        Kevin – thanks for such a thoughtful answer. Scott’s book is on my reading list – I should move it up. I’ve become convinced PE, responsive and accessibility are all part of the same puzzle.

        I wasn’t aware polymer supported PE, certainly extensive googling didn’t turn that up. I got the impression it assumed client side extension to basic web functionality to work (javascript, ajax etc). I’ve not explored 1.0 at all so thanks for the pointer. I had decided to use jQueryMobile as a good PE base and figure how to make components as I went.

        One reason I’d like to use web components is Mozilla AppMaker offers some real promise for supporting less technical creation of apps using bricks (an abstraction built on top of web comps that adds a messaging layer)

        A big part of supporting PE is providing a server render traditional web experience where the only ways to communicate back to the sever are links and forms. That requires extra work (and the apparent duplication that isomorphic claims to solve).

        However that extra effort is reasonable given the viewpoint that developing for the web means developing for a distributed, unreliable system with unlimited user preferences, devices and contexts. Not everyone has a fast reliable connection with the latest version of evergreen browsers. That’s the “fun” of web dev.

        Thanks again :)

        June 16th, 2015 at 02:58

    2. Kevin Lozandier

      Can you clarify what you mean by Polymer turning into a “everything is a component” framework?

      At face value, it provides merely an abstraction to make Web Components; users or developers of components can decide how ambitious they want to be with this paradigm shift of how they compose their web applications.

      Some will merely recreate their existing components as more reusable components for use within their apps, while also making it possible for them to be more easily used outside their application by users.

      Others may take a more ambitious approach that composes elements to the extent it rivals the functionality of single-page JS frameworks.

      Another extreme is people composing with elements to the extent that a single instance of their app can be instantiated through a single component inside the body element.

      Nonetheless, the latter is an extreme example & may very well make sense for a given use case. In any case, Polymer doesn’t require or force you to think of “everything is a component”.

      At best, they provided a great starter kit to demonstrate how well elements can work together with minimal non-component-related JS to provide a SPA experience.

      Considering even that wasn’t an “everything is a component” approach, I’d like to know examples of such a mindset, to better evaluate the pros & cons of that thinking.

      June 15th, 2015 at 10:20

      1. Kevin Lozandier

        Interestingly, this comments thread silently omits anything that looks like a tag; that seems like something that could be handled more elegantly.

        Let me re-word this passage:

        Another extreme is people compose with elements to the extent a single instance of their app can be instantiated through a single component inside the body element.

        June 15th, 2015 at 10:25

      2. SteveLee

        > Can you clarify what you mean by Polymer turning into a “everything is a component” framework?

        Kevin – from the point of view of gaining feedback on the W3C concept of Web Components, having non-UI/DOM markup components like timers, or even the extreme you suggest of a single top-level tag, seems to be a bit of a distraction.

        I do agree there should be choice in how you use the ability to add custom elements, for any reason. You could consider it to be a declarative DSL approach. Personally I’d stick to Web Components being UI elements and use other tools for architectural components.

        June 16th, 2015 at 02:38

  7. Evan You

    I’m sad to see that the state of the specs is more in disagreement than I thought. But in case anyone is looking for a Web-Component-like development solution without the concerns of changing specs or polyfills, you should take a look at Vue.js:

    June 10th, 2015 at 07:48

  8. Cameron Spear

    Interoperability is a HUGE one. There’s a movement in the PHP community to create framework agnostic components that can easily be brought into any (modern) framework.

    This is true for JavaScript, too, but you’re severely limited in what you can do without React/Angular/Ember behind it, and Web Components can help bridge that gap. We’ll start seeing more sophisticated components like datepickers and selects-with-search, etc., that have the potential to play very well with “the big frameworks,” much more so than today.

    June 10th, 2015 at 19:00

    1. Evan You

      I’m not saying the spec is a bad idea. The problem is two-fold: people want a component-oriented development experience, and they want interoperability. Currently Web Components are trying to solve both, but it turns out they don’t solve the development-experience issue very well on their own. IMO it would be better if the spec focused on interoperability alone and left the dev-experience part to frameworks.

      P.S. It is trivially easy to wrap a Vue component as a spec-compliant custom element:

      June 15th, 2015 at 10:14

  9. Thad

    What is ‘DMITRY’ supposed to stand for?

    June 11th, 2015 at 07:57

    1. Wilson Page

      It’s just the name of the person who proposed it.

      June 28th, 2015 at 21:20

  10. markg

    On multiple shadow roots: my understanding was that it was decided multiple shadow roots would not be in v1 of Web Components. Right?

    June 11th, 2015 at 21:08

    1. Wilson Page

      That’s correct.

      June 12th, 2015 at 00:38

  11. Luke

    I’m surprised custom elements are being given so much consideration for high-performance web components. It seems to me adding all these features would make the DOM even slower than it already is, while frameworks like Backbone let you abstract the model out of the HTML DOM; in the future everyone might be using objects to generate canvas or platform views instead of traditional HTML DOM objects. Considering frameworks like Backbone and templates, I thought the general trend was away from keeping semantic markup and data within elements, and towards JS models/views/objects that render() a view as needed (whether the view is a DOM element, part of a canvas, SVG, etc.).
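
    The “model outside the DOM” pattern mentioned above can be reduced to a few lines. This is an illustrative sketch, not Backbone’s actual API: state lives in a plain JS object, and a render() function produces the view from it on demand instead of storing data in DOM attributes.

```javascript
// Sketch (illustrative, not Backbone's real API): the model lives in a
// plain JS object, and render() derives the view from it on demand.

const model = { label: 'Save', disabled: false };

function render(m) {
  // The output could just as well target canvas or SVG; here it's a string.
  return '<button' + (m.disabled ? ' disabled' : '') + '>' + m.label + '</button>';
}

let view = render(model);  // '<button>Save</button>'
model.disabled = true;     // change the model, not the DOM
view = render(model);      // '<button disabled>Save</button>'
```

    The point is that no data is read back out of the rendered markup; the model is the single source of truth, and the view is re-derived whenever it changes.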

    I’m curious if any of the web-component implementations here would ease up on the restriction of having one main UI “thread” (i.e. JS that is not in a web worker) that is able to access the DOM. For example, Java Swing has SwingWorker, which is able to update the UI from a background thread, and Android has runOnUiThread(Runnable). Will there be something similar for DOM-heavy manipulations that may slow down the browser?

    June 11th, 2015 at 22:50

    1. Stephen Williams

      runOnUiThread() just puts the request in a queue to be run by the main UI thread when it is idle. SwingWorker is probably doing something similar, making sure that only one thread accesses GUI resources at a time.
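
      That queue-and-drain model translates directly into JavaScript terms. Below is a minimal sketch, with hypothetical names (runOnUiThread, drainUiQueue) rather than any real browser or Android API: “worker” code only enqueues tasks, and a single consumer runs them one at a time.

```javascript
// Sketch of the queue-and-drain pattern behind runOnUiThread(), reduced
// to a plain JavaScript task queue. All names here are illustrative.

const uiQueue = [];

// A "worker" never touches UI state directly; it just enqueues a task.
function runOnUiThread(task) {
  uiQueue.push(task);
}

// The single "UI thread" drains the queue when it is idle, so only one
// piece of code ever touches UI state at a time.
function drainUiQueue() {
  while (uiQueue.length > 0) {
    uiQueue.shift()();
  }
}

// Usage: tasks queued by "worker" code run later, in FIFO order.
const log = [];
runOnUiThread(() => log.push('update label'));
runOnUiThread(() => log.push('repaint'));
log.push('worker finished');  // the current code runs to completion first
drainUiQueue();
// log is now: ['worker finished', 'update label', 'repaint']
```

      In a real browser the “drain” step is the event loop itself, which is why long-running queued work still janks the page: everything ultimately funnels through that one consumer.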

      July 9th, 2015 at 00:48

  12. Erik Isaksen

    This is the best article I’ve read on implementation to date. It is refreshing to read an article with such detail and a straightforward approach.

    June 12th, 2015 at 13:32

    1. Wilson Page

      Wow, thanks for your kind words Erik :)

      June 15th, 2015 at 00:45

    2. Kristian Gerardsson


      June 15th, 2015 at 05:10

  13. Neil Stansbury

    Oh for XBL2….

    Web Components is definitely something worth striving for – XBL in Firefox convinced me of the enormous value of the concept.

    For me the <template> and <content> tags are the only really critical ones for implementing HTML-based components. <element> would have been nice, but keyed directly on named <template> elements.

    Rich HTML components with those tags and a simple JS shim works great.

    IMO a declarative approach is definitely the core priority, but Shadow DOM seems an easy choice to defer to v2 in favour of a fast & simple v1 spec.

    June 16th, 2015 at 06:16

  14. Neil Stansbury


    “For me the <template> and <content> tags are the only really critical ones for implementing HTML based components. <element> would have been nice, but keyed directly on named <template> elements.”

    PS. Why do I have to escape my own markup in a web blog about HTML markup??

    June 16th, 2015 at 06:21

  15. Chris Sanders

    Nice write up on the state of web components. Kevin thanks for the additional context and links.

    June 20th, 2015 at 12:03

  16. Jani Tarvainen

    Thank you for an excellent wrap-up of what’s going on. I wrote about the similarities of this to the Modular XHTML we were supposed to have:

    It’s also worth noting that there are plenty of good libraries such as React and Riot that already make this a feasible option:

    July 8th, 2015 at 19:59

  17. numan

    This article proves why the web is losing to native apps – it has been 5 years and we still don’t have consensus on Web Components.

    Embarrassing, really. Can’t blame Google for at least trying…

    July 8th, 2015 at 21:04

  18. Paul

    As someone who’s trying (and occasionally succeeding) to use Polymer at work, I read through all of this nodding. Yup, this explains why everything is so terrible, and nothing works the same in different browsers!

    Then I got to the end and read the conclusion, and I was surprised by your apparent acceptance. My interpretation of your conclusion is “1. Web Components are terrible right now, 2. the entire system we have for adding features to web browsers is terrible, 3. but we’re almost done with Web Components and then it’ll all be happyfun times!” How can you be optimistic about this?

    I have no problem with one party (like Google) taking the lead and just trying something, but if there’s one thing I learned from the SPDY/HTTP2 case, it’s that backwards compatibility is a boat anchor. HTTP/1.1 is one of the simplest protocols, and we couldn’t even improve that without rewriting it completely. I have zero hope that Web Components (or whatever comes next) are going to be at all usable. It’s a nightmare already, and I see no signs that anyone is doing anything to make it less complex. In fact, quite the opposite.

    I’m longing for the days when I worked with something simpler, like Autoconf. I never thought I’d say that. That’s how big and nasty Web Components are.

    As a developer, one of the things I’ve always loved about the web is that it’s fundamentally not a very complicated system. The people in charge are making it into a “huge knotted ball of string”, as you say. I can appreciate the problem that Web Components are trying to solve, but a huge knotted ball of string is not what I want.

    It’s a good thing that this has taken so long. I wish it would take even longer! If they could drop this much complexity on our platform any more quickly, we’d all be dead in a year.

    July 9th, 2015 at 08:49

Comments are closed for this article.