Accessibility and web innovation – a constant struggle

I just came back from a small “accessibility tour”, giving a talk about accessibility and web innovation at Funkas Tillgänglighetsdagar in Stockholm, Sweden, and then at the W3Cafe meetup in Paris.

In essence, what I was musing about is that there is still a massive disconnect between accessibility and the development world. Accessibility is not seen as something that is cool and bleeding edge but as a necessary evil. If you ask about accessibility on developer mailing lists that juggle HTML5, Node.js, CSS3 and other cool technologies with ease, you are very likely to hear that people consider it an afterthought or make sure that “the interface degrades gracefully”.

When you ask the accessibility world about cool new technologies you are very likely to hear that they may be interesting in a few years but are not ready yet and certainly will never be accessible in a legal sense.

Having been positioned between these two parties for a long time, I am getting tired of this, and I want the two factions to move closer to each other.

Accessibility is part of everything we do – the physical world has become much better in the last decades because we care for the needs of people with disabilities. Lowered kerbs on sidewalks, OCR scanning, subtitles and captions on movies and TV programs – these are all things invented for a disability need, but we all benefit from them now. The same can and should happen in interface design and web development. If you think about it, the features that make a good mobile interface also cover a lot of the needs of different disability groups. So why don’t we work together?
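
To make that overlap concrete, here is a minimal CSS sketch – my own illustrative example, not something from the talk, and the .button class name is just a placeholder – of how typical “good mobile interface” rules double as accessibility improvements: generous touch targets help people with limited fine motor control, relative font sizes respect the user’s own text and zoom settings, and visible focus styles keep keyboard navigation usable.

    /* Generous touch targets: easier on small screens and for people
       with limited fine motor control. */
    a.button,
    button {
      display: inline-block;
      min-height: 44px;
      min-width: 44px;
      padding: 10px 16px;
    }

    /* Relative units respect the user's own font size and zoom settings;
       high-contrast text also helps readers with low vision. */
    body {
      font-size: 100%;
      line-height: 1.5;
      color: #222;
      background: #fff;
    }

    /* Visible focus styles keep keyboard navigation usable,
       not just mouse and touch input. */
    a:focus,
    button:focus {
      outline: 3px solid #0066cc;
    }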

You can see the slides of the talk on Slideshare:

The audio of the talk is available at archive.org:

There are also extensive notes on the talk available on my blog.

About Chris Heilmann

Evangelist for HTML5 and open web. Let's fix this!

2 comments

  1. esj

    I could go on forever about accessibility problems because I have been disabled for the past 15 years thanks to many hours of programming. Here are some of the things I think are really hurting us with accessibility.

    We try to impose an accessible interface on top of an existing interface. For example, every time I try to impose speech recognition on top of a GUI, it hurts. Literally. What we need to do is separate the application from the user interface and allow each accessibility community to build their own interface based on their own needs. On my website, I’m starting a series of entries on how to make it possible to manipulate non-language data using speech recognition.

    The second problem we have is that we try to impose the interface everywhere. I have no problem carrying a smart enough box running speech recognition so I can do my work anywhere, on any other machine.

    As a disabled person, I should not have to impose my need for a speech recognition interface on all the machines I might touch. Granted, Nuance might object to this because it cuts down their revenue, but the reality is that I own a speech recognition user interface and I should be able to use any machine with it. An example of this would be an ATM. Why should an ATM need speech recognition or text-to-speech? Why can’t I walk up with my smart phone or equivalent, link to it, and then say what I need to say to get the job done? Or, if I were blind, listen to what my UI for the ATM tells me?

    Disability interfaces must be modifiable by the end user. Every speech user interface I’ve ever been given is wrong, and I end up reworking it if that’s possible, or I don’t use it if it’s not. This is one of the things that was frustrating me with the W3C speech recognition working group. Everyone is focused on making a fixed interface for speech and there is no place for a user-created interface. Again, Nuance is a great example of how to do user interfaces wrong in that regard. They don’t give us Select-and-Say capability, they make it difficult for us to override their user interface, and much of the UI in their application is not accessible by speech recognition and cannot be made so.

    And one more. The big problem with user interfaces for disabled users is that they are created by TABs (the temporarily able-bodied) instead of us crips. You need to live like us for about a year, and then you’ll start to get a clue about what works and what doesn’t. Throw away your keyboard and live with speech recognition. Throw away your monitor and live with text-to-speech, and that brings us back to our first point: you will hear the negative effects of imposing disability user interfaces on top of the existing interface.

    If you’re really smart, you will notice an economic argument here. The accessibility market is relatively small, and it puts a large load on a development organization to do it right. There’s usually not sufficient payback unless there is a big governmental stick hanging over their head. They’re probably also unlikely to do it right, because they won’t invest in the knowledge or the time.

    Take into account the previous two points I made, in addition to the economic one, and what you get is the axiom: make it possible for us to do it ourselves, because you, the developer, shouldn’t even try. You’re only going to screw it up.

    Eat your own dog food.

    April 18th, 2011 at 23:21

  2. iManu

    Hi Chris,
    I often wonder why there is no “accessibility” tab in the browser preferences. Or would it be too discriminatory for the user to specify their disability? This information could be retrieved by the site, as is the case for the size of the screen, for example.
    (via http://translate.google.fr)
    ++

    April 22nd, 2011 at 05:13
