International Web Access Guidelines “Ineffective”, Academic Claims

Conforming to the international industry standard Web Content Accessibility Guidelines (WCAG) can be “ineffective” as a method of reducing problems encountered by blind and visually impaired web users, one IT academic has claimed.

The WCAG guidelines are created by the World Wide Web Consortium (W3C), the international body that oversees web standards. In his PhD thesis for the University of York, ‘Disabled people and the Web: User-based measurement of accessibility’, André Pimenta Freire – a specialist in human-computer interaction – writes that a large number of the problems print-disabled computer users encounter on web pages would not have been resolved by conformance to WCAG criteria.

“Achieving certain conformance levels with WCAG 1.0 and WCAG 2.0 can be very ineffective as a means to reduce the numbers of problems encountered by disabled users”, writes Freire. “The way the conformance requirements are structured do not seem to address the all-important concern of making websites that disabled users can use better and encountering fewer problems.”

The claims are based on a study carried out as part of his thesis, which involved task-based user evaluations of 16 websites by 64 users. Of these, 32 were blind, 19 partially sighted and 13 dyslexic. Manual audits were used to determine each website’s conformance to WCAG 1.0 and 2.0.

The two primary aims of the study were to characterise the problems that print-disabled computer users encounter on websites, and to investigate the relationship between user-defined accessibility issues and accessibility guidelines, with a focus on WCAG 1.0 and 2.0.

The study demonstrated that conforming to the checkpoints and success criteria of WCAG does not necessarily, by itself, make a website accessible to print-disabled users, says Freire.

Speaking to E-Access Bulletin, Helen Petrie, Professor of Human-Computer Interaction at the University of York, co-supervisor of Freire’s PhD and the senior academic who led the research, said that although WCAG has made “a great effort” and highlighted important problems, websites with higher conformance to the guidelines are not easier to use for blind users. “There is no significant difference in the number of user problems on non-conformant sites and on conformant sites”, Petrie said.

This has problematic implications if legislation or policy on web accessibility were to be based on WCAG conformance, said Petrie. A further problem is that “developers are struggling to understand [WCAG]”, meaning that direct user-testing with disabled users should be encouraged as a means of testing accessibility, she said.

Both Petrie and Freire stress the importance of involving disabled users directly in the design and evaluation process of building websites, and of moving away from “the technical conformance approach” to accessibility.

“The conclusions reinforced the importance of involving disabled users in the design and evaluation of websites as a key activity to improve web accessibility … The current status quo of proposing implementations based on expert opinion, or limited user studies, has not yielded solutions to many of the current problems print-disabled users encounter on the web”, writes Freire.

The thesis is available as a PDF from the links below:

Full link: http://etheses.whiterose.ac.uk/3873/

Short link: http://bit.ly/14y6M1G

Comments

  1. Kevin White | May 31st, 2013 | 11:52 pm

    This is an interesting study and it is important that those who are commissioning accessibility work are conscious that the checklists approach is not enough.

    That is not to say that I wouldn’t poke and prod at the work a little bit.

    Possibly the biggest poke is the lack of a non-disabled control group to allow for filtering of usability issues. When undertaking an audit I am always conscious of those issues that will cause problems for all users (usability) and those that will cause problems for users with disabilities (accessibility). This distinction is important both in terms of possible solutions and in terms of client response.

    Another tricky issue is software versions. This study used slightly older versions of key tools, which raises questions about how later tools might better cope with underlying code. Similarly, the strides in the use and implementation of ARIA techniques respond to many of the identified issues.

    While I do understand the need for laboratory control, in testing we would tend to go where the participant wanted. This ensured that the participant was in an environment they were comfortable with, and eliminated the possibility of environmental factors making it more difficult for users to engage with the sites under test.

    Overall though, I think this raises some interesting challenges. Many developers who I talk to and train are keen to understand what they need to do, and this is important and can be addressed relatively well through WCAG 2.0 techniques. What is perhaps less apparent is why they need to do things in this way. There is nothing quite like the sound of the penny dropping when I let developers and designers listen to Jaws reading an image element with a missing alt attribute… much more effective than telling them why they need to put it in.
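
    A missing alt attribute is also the easiest kind of WCAG failure to catch mechanically, which is partly why the checklist approach feels so complete to developers. As a minimal illustrative sketch only (not a tool used in the study – the sample markup and function names are made up), the following Python uses the standard library’s html.parser to flag img elements without an alt attribute, the failure covered by WCAG 2.0 SC 1.1.1:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> elements that lack an alt attribute (WCAG 2.0 SC 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of offending images

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        # An empty alt="" is a deliberate choice for decorative images,
        # so only a completely absent attribute is flagged here.
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "(no src)"))

def find_missing_alt(html: str) -> list:
    """Return the src of every <img> in the fragment that has no alt attribute."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing

sample = '<p><img src="logo.png" alt="Company logo"> <img src="chart.png"></p>'
print(find_missing_alt(sample))  # ['chart.png']
```

    Of course, a check like this says nothing about whether the alt text that is present is any good – exactly the gap between automated conformance and user experience that the study discusses.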

  2. GF Mueden | June 4th, 2013 | 2:07 pm

    Alas, my old eyes have me set to read in white on black and the PDF file is a blinding glare; can’t read it.

    I think it a big mistake to study eye readers (those who read with their eyes but not well) and ear readers (those who read with their ears using text-to-speech technology) as a single group. What bothers one group may be of no bother to the other; e.g., the AFB website is unfriendly to those with limited visual fields, which does not bother the ear readers. What helps one group is often of no help to the other; e.g., allowing a change to a bold font is of no help to the ear readers.

    The guidelines do not call attention to these two groups or describe the fixes and accommodations for each, and yet the causes of the problems lie in those differences.

    This is another accessibility problem. Those who study it are themselves at fault.

  3. Graham Wilkins | June 7th, 2013 | 7:47 pm

    This study simply puts into academic form something that is very apparent to many users. I support two totally blind young men, one of whom is very technically capable (Sam), while the other only uses technology because he has to! (Dave). Although Sam is very able, and knows all the workarounds using different browsers and screen readers for the best result, for practical purposes he does not find websites significantly more accessible to him.
    Even fully compliant websites often have significant faults, and one worrying problem is that despite the web designers being very committed to accessibility, it does come as an afterthought. In one recent case, Dave’s bank has a very accessible online banking system, but twice in the last 3 years they have upgraded it and major elements became inaccessible. The problems were corrected within weeks, but it demonstrates how accessibility comes late into the thinking, and clearly in this case (by their own admission) it was not tested by a user, which would have immediately highlighted the issues.
    Kevin White makes an important point in separating usability and accessibility. I personally find many websites very difficult to navigate, usually due to pages filled with links and graphics which may or may not “do” something, with key links made so small that they are lost in the clutter. This would, I imagine, be almost impossible for an “eye reader” (see G.F. Mueden, above) to navigate. Sometimes these sites are actually easier to navigate with a screen reader because of the option of opening a window which lists all the links in alphabetical order. However, often these are also the sites which are difficult to access with a screen reader.
    Dave actually finds the RNIB website quite difficult to navigate, and for years hasn’t been able to get his screenreader to “see” the PAY button on Amazon’s website.

  4. Ian Hamilton | June 19th, 2013 | 10:08 am

    Yes, I’m with Kevin on this one. The PDF is far too long to read, but from what I’ve skimmed they’re referencing things like unclear and confusing layouts, and confusing and disorienting navigation.

    These are not accessibility issues, and are not what WCAG is for. WCAG is to help avoid being unnecessarily excluded because of disability, and not to guarantee that people with disabilities can use a website.

    The studies described include asking people to find a certain piece of information. It could be just as hard for anyone at all to find that information, regardless of whether or not they are disabled.

    Having said that, although for me this study has little relevance beyond again pointing out that usability is important, I agree that testing with people with disabilities is essential.

    However, and I do have to be absolutely clear about this: they’re living in a bit of a dream-land to suggest that people with disabilities should be involved throughout the process.

    Fair enough if you’re a government agency, but in the real world in most organisations you’re lucky to get one round of testing, so if you want to make a real difference across the web in general – including small studios and small budgets – it’s not the solution. Testing also means statistically insignificant small samples for which it’s impossible to represent even all broad categories of disability, let alone the differences and subtleties within them.

    So where I disagree is with the idea that testing with people with disabilities is the gold standard, and something that should replace all other means as inadequate.

    Testing, guidelines and expert review each have their own major inadequacies, and their own major benefits. Do all three and they compensate for each other very nicely.

  5. Ian Hamilton | June 19th, 2013 | 10:10 am

    Third paragraph is missing a bit sorry, should read:

    “The studies described include asking people to find a certain piece of information. It could be just as hard for anyone at all to find that information, regardless of whether or not they are disabled”

  6. Grant Broome | June 25th, 2013 | 4:35 pm

    Very interested in the comments above and having read the study I’d also like to express concern over the recent publication highlighted in the last e-access bulletin.

    I work with other accessibility professionals at DIG Inclusion and, having discussed the article, our fear – and we believe others in the community may agree on this – is that casual and perhaps unfamiliar readers will take the title of this article as factual rather than sensational, and get no further than the first few pages of this lengthy and hard-to-digest study.

    In this detailed response we’ve called out a few fundamental issues with the study and suggested some other third-party resources about types of features that disabled people find useful.

    Our initial reaction was that the headline’s claim that WCAG is “ineffective” is an over-reaching statement, and one which, we fear, may result in readers unfamiliar with web accessibility dismissing the guidelines altogether. Many of us who use the guidelines in the real world know how essential they are in educating web developers, unifying objectives for browser and assistive technology developers, and as a tool for measuring accessibility conformance (setting aside the author’s decision to measure against WCAG 1.0, which has been redundant for the past four years).

    Very few people would go as far as to claim that the guidelines are perfect, but we hope that only a tiny number would actually consider them to be “ineffective”.

    **Claims of conformance
    As supporters of user-testing with disabled users, we would not promote WCAG as a be-all and end-all solution for accessibility: some improvements to the guidelines are perhaps required. We suggest these include a greater focus on people with multiple disabilities, particularly Deaf-blind users (who, we cannot help but notice, are not included in this study). More importantly, however, we would not advocate building a web product without a framework which includes the support of a robust and recognised set of guidelines such as WCAG 2.0. Approaching the development or testing of web products without this framework, and relying solely on feedback from users, is likely to result in an inconsistent and unreliable approach: one based not on measurable metrics but on individual opinion or preference.

    Some of the statements in the article we would suggest are troubling are as follows:

    “There is no significant difference in the number of user problems on non-conformant sites and on conformant sites”

    This report does not appear to differentiate between a site that is 100% conformant and one that is 99% conformant (which is effectively the same thing). Statements like this may encourage readers to conclude that websites that provide:

    ● alternatives to text
    ● captions and audio descriptions
    ● keyboard access
    ● semantic information
    ● robust code

    …(which are the backbone of WCAG guidelines), are no more accessible than those without!

    We are very surprised at the repeated claim that more highly conformant sites are not easier to use for disabled users:

    “websites with higher conformance to the guidelines are not easier to use for blind users.”

    This presumably includes criteria not specifically designed to aid blind people, but intended to improve accessibility for other groups of users. We noted that large groups of users such as mobility-impaired people were omitted from the study.

    To suggest that a site with 0% conformance to WCAG 2.0 is no easier to use than one with 100% conformance is a puzzling statement to make and it is difficult for us to imagine how a study of this depth could arrive at such an impossible conclusion.

    **Confusing accessibility with usability/performance issues and failure to credit WCAG 2.0

    As pointed out in the previous comments, many of the issues experienced by disabled users will be experienced by all users, so highlighting them as an exclusive issue for disabled users is misleading, as is failing to properly acknowledge when a problem is covered by the WCAG guidelines. For example, the table on page 142 includes multiple examples where a problem:

    ● is incorrectly highlighted as an issue only detectable by users with disabilities
    ● is not acknowledged as being covered by WCAG 1.0 or 2.0

    Examples include:

    ● navigation items do not help users find what they are seeking
    ● content not found on pages where expected by users
    ● system too slow
    ● no alternative to text in specific format
    ● too much information on a page
    ● broken link

    Among others that are clearly usability or performance issues, or are otherwise covered by WCAG 2.0 anyway, e.g. “No alternative to text in specific format” (covered by WCAG 2.0 1.1.1 Non-text Content (Level A)) and “Broken link” (covered by WCAG 2.0 2.4.4 Link Purpose (In Context) (Level A)).
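
    Some of these distinctions can be machine-checked and some cannot. As an illustrative sketch only (not a tool from the study or from DIG Inclusion, and the list of vague phrases is an assumption), the following Python flags links whose visible text alone fails to convey their purpose – the concern behind SC 2.4.4 – whereas an issue like “navigation items do not help users find what they are seeking” has no comparable automated test:

```python
from html.parser import HTMLParser

# Assumed, non-exhaustive list of link texts that convey no purpose out of context.
VAGUE_PHRASES = {"click here", "here", "read more", "more", "this link"}

class VagueLinkChecker(HTMLParser):
    """Collect <a> elements whose visible text alone does not convey their purpose."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text = ""
        self.vague = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = ""

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            if self.text.strip().lower() in VAGUE_PHRASES:
                self.vague.append(self.text.strip())

def find_vague_links(html: str) -> list:
    """Return the text of every link in the fragment that reads as vague out of context."""
    checker = VagueLinkChecker()
    checker.feed(html)
    return checker.vague

sample = '<a href="/report">2013 annual report</a> <a href="/r.pdf">click here</a>'
print(find_vague_links(sample))  # ['click here']
```

    Even where a check like this passes, only a human (ideally a user) can judge whether the link text is actually helpful, which is why both guideline audits and user testing have a role.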

    Throughout the document there are claims that seem to suggest that the author may not have a proper understanding of the issues covered by WCAG 2.0, or perhaps has a bias against the guidelines for some reason.

    Note that the aforementioned “problem”, i.e. “No alternative to text in specific format”, relates to text being available only as a PDF document. This does not state whether the PDF is accessible – the inference is that all PDF content should be available in an alternative format such as MS Word or plain text, even though a PDF may be perfectly accessible as it is and may be the most appropriate format for many documents.

    As keen and devoted advocates of accessibility for all document formats, we would also like to point out that although the PDF is tagged to the PDF/A specification, a very high number of significant accessibility issues are present. In a document of this nature, we were very surprised to find such significant and compounding errors.

    **Conclusion
    Whilst the study does provide some useful information about the types of issues that arise when visually impaired or dyslexic people use the web, the sheer length of the report, combined with this very surprising conclusion and the failure to credit WCAG 1.0 or 2.0 with identifying the majority of problems encountered by users, unfortunately makes this study a poor resource (in our opinion), and we would encourage readers to question and explore its claims in more detail.

    As an alternative, for those looking for user studies on which features are useful to people with disabilities, we would recommend the WebAIM Screen Reader User Survey and the WebAIM Survey of Users with Motor Disabilities. We believe these surveys to be much simpler, more digestible and more representative resources which can be used to learn which types of features are preferred by assistive technology users.

    Again, we would like to state our advocacy of user testing with people with disabilities where it is appropriate and affordable. However, where possible we would encourage testing to be conducted in the user’s home environment with the user’s own hardware and software, thereby using the methods that are most natural to them.

    Based on our own experience, we believe the current WCAG guidelines to be “very effective” in helping to ensure that websites are accessible. We would not wish anyone to be discouraged from adopting them as part of their development and testing process, or indeed as part of a robust accessibility policy.

  7. Detlev Fischer | July 4th, 2013 | 6:14 pm

    It looks like the research on which this thesis is based was aired a while ago in “Guidelines are Only Half of the Story: Accessibility Problems Encountered by Blind Users on the Web”, authored by Christopher Power, André Pimenta Freire, Helen Petrie, and David Swallow, all at the University of York, UK. It was presented at CHI 2012, May 5–10, 2012, Austin, Texas, USA, and published as part of the ACM digital library.

    I have looked at the earlier study in some detail and I think there are serious methodological flaws that put a big question mark over the outcome of this research: http://url.ie/i0tc

    Please note that I haven’t compared this thesis to the earlier study in any detail, so Freire’s thesis may differ somewhat from the earlier joint publication. However, as I mainly cover the research approach, not the wording of results, my hunch is that the same argument can be made in both cases.

  8. Helen Petrie | July 29th, 2013 | 12:12 am

    As authors of the research discussed in this article, we would like to reply to the interesting comments made. As this website does not support threading, we will post a comment about comments from each of the commentators, indicating which set of comments we are referring to.

    Andre Freire, Chris Power and Helen Petrie

  9. Helen Petrie | July 29th, 2013 | 12:15 am

    On the comments made by Kevin White

    Kevin said: Possibly the biggest poke is the lack of a non-disabled control group to allow for filtering of usability issues. When undertaking an audit I am always conscious of those issues that will cause problems for all users (usability) and those that will cause problems for users with disabilities (accessibility). This distinction is important both in terms of possible solutions and in terms of client response.

    Our response:
    The use of a control group depends on what research question you are asking. In this study we are interested in the overall experience of disabled users with websites, how that relates to the conformance of the websites to WCAG1/2 and to the number of problems they encountered when using the websites. In a previous study (Petrie and Kheir, 2007 available at http://www.cs.york.ac.uk/hci/publications/index.html) we did have a control group when we compared problems encountered by blind and sighted web users.

    The question of usability and accessibility is indeed an interesting one. It’s important to remember that disabled users will experience both accessibility and usability problems, and these problems will just be problems for those users. Their overall experience will undoubtedly be related to the incidence of both kinds of problems, so it may not be helpful to segregate them when investigating disabled users’ web experience. In addition, it may well be that disabled users experience usability problems as more problematic than mainstream users do. This is an issue we explored in Petrie and Kheir and found some evidence for. So it is even more important to address usability problems to improve the web user experience for disabled users.

    Kevin said: Another tricky issue is software versions. This study used slightly older versions of key tools, which raises questions about how later tools might better cope with underlying code. Similarly, the strides in the use and implementation of ARIA techniques respond to many of the identified issues.

    Our response:
    We tested with the versions that the participants actually use, to create realism in the results. This is what disabled people are actually experiencing now. The technologies that people use and the technologies used by websites (such as ARIA) are always going to be somewhat behind the optimal possible. If we had tested only with the very latest versions of software (operating system, browser and assistive technology) we would either have had to ask participants to work with technologies they are not familiar with (and very possibly don’t know how to use properly) or have severely limited the number of disabled participants we could have involved in the studies (we would have needed to find participants who were competent in using the latest versions of the operating system, the browser and their assistive technology, not an easy thing). Then we would have needed to find websites that used only the latest and best technological solutions for accessibility. We actually tried for months to do exactly that, and we found it impossible to find such websites (even amongst websites that attempted to meet high accessibility standards). If we had been able to do that, we would have been able to show the best that websites can do in terms of disabled user experience. But that would not have reflected the current experience of disabled web users. So we opted for the technologies that our disabled participants actually use and the technologies found on a range of websites. This reflects more accurately the typical user experience of the web for disabled users now.

    Please also note our answer to Question 2 in the FAQ about our papers at http://www.cs.york.ac.uk/hci/publications/index.html. In the paper we published last year the participant information was incomplete: we wrote that one of the participants used JAWS 5, which he does at home, but he uses JAWS 10 at work, and used JAWS 10 in the evaluation. The realization of this inaccuracy in our paper led to an interesting further analysis of the data (reported on the FAQ page, but not yet in Andre’s PhD thesis or in any of our published papers). We looked at the number of problems that participants who used JAWS encountered, depending on whether they used JAWS 8/9, 10 or 11. One would predict that with more recent versions of JAWS, users would encounter fewer problems with websites, as JAWS has become more sophisticated at dealing with the issues on the web. This was not the case: there was no statistically significant decrease in the number of problems encountered by JAWS users with the newer versions of the screenreader. This is indeed disappointing (and may well not be the fault of JAWS; it may be the websites), but it suggests that the versions of the software are not as important as one might expect.

    While ARIA does have promise of being effective, it would be nice to see studies with screen reader users showing that there is actually an improvement for them.

    Kevin said: While I do understand the need for laboratory control, in testing we would tend to go where the participant wanted. This ensured that the participant was in an environment they were comfortable with, and eliminated the possibility of environmental factors making it more difficult for users to engage with the sites under test.

    Our response:
    With all due respect, this point seems to contradict the last point: if we had asked disabled users to use only the latest versions of software, we would have been creating a stressful and artificial situation for participants. In fact, our “lab” is a very relaxed situation; it is a “HomeLab”, with a living room and kitchen, and we make considerable efforts to make participants feel at ease. They have coffee and biscuits before the evaluation starts and during breaks, we take them to lunch if it’s that time, we all chat with them, etc. Conversely, going to participants’ homes or workplaces can be equally stressful. Many people do not have a private place at the workplace where we can do testing, so that is simply not possible. To go to people’s houses is potentially an invasion of their privacy that people are not comfortable with and that we would not want to force on them. Sending a male researcher to the home of a female participant can be problematic, as indeed can sending a female researcher to the home of a male participant (I have had female researchers who are very uncomfortable about this, and this undoubtedly makes them stressed when doing the evaluations).

    In addition, each participant was given a period of time to familiarize themselves with the computer and to configure the system to be as close as possible to their personal settings at home before testing began.

  10. Chris Power | July 29th, 2013 | 12:21 am

    Points raised by GF Mueden
    GF Mueden: Alas, my old eyes have me set to read in white on black and the PDF file is a blinding glare; can’t read it.

    Response
    Sorry you had difficulty reading the PDF. In Acrobat Reader, you can adjust the colour of the text and of the background of the PDF document by going to Adobe Reader -> Preferences -> Accessibility -> Replace Document Colours. If that doesn’t work, and you are interested in reading it, we can try to generate a new copy with a colour scheme that meets your preferences.

    GF Mueden said: I think it a big mistake to study eye readers (those who read with their eyes but not well) and ear readers (those who read with their ears using text-to-speech technology) as a single group. What bothers one group may be of no bother to the other; e.g., the AFB website is unfriendly to those with limited visual fields, which does not bother the ear readers. What helps one group is often of no help to the other; e.g., allowing a change to a bold font is of no help to the ear readers.

    Response:
    We did not study eye readers and ear readers (wonderful terms :) ) as a single group. Indeed, the whole point of the PhD was to study in depth people who use speech (via a screenreader), those who use screen magnification/manipulation (via magnification software) and those who have difficulty reading due to dyslexia, and to compare their problems.

    GF Mueden said: The guidelines do not call attention to these two groups or describe the fixes and accommodations for each, and yet the causes of the problems lie in those differences.

    Response:
    WCAG does draw attention to the problems of these groups and does explain fixes and accommodations for each in great detail. We are not responsible for the guidelines, but this seems a very unfair criticism of them.

  11. Helen Petrie | July 29th, 2013 | 12:22 am

    On the comments made by Graham Wilkins

    Graham Wilkins said: This study simply puts into academic form something that is very apparent to many users.

    Our response:
    We are interested to hear of your experience. Many users do indeed still encounter large numbers of problems when surfing the web. Our study highlights how diverse that range of problems is and demonstrates the prevalence and severity of those problems as perceived by users.

  12. Chris Power | July 29th, 2013 | 12:40 am

    Points raised by Ian Hamilton

    Ian said: Yes, I’m with Kevin on this one. The PDF is far too long to read …

    Response
    The document referenced in the article is Andre’s PhD thesis. By definition this is a long and complex document; it has to exhaustively explain three to four years of work by the student. It is definitely not a document for light reading. We are quite bemused that this document was referenced in the article rather than the publications we have produced, which have been available on our website (http://www.cs.york.ac.uk/hci/publications/index.html) for nearly a year, along with additional comments from us about the research. On our website we provide PDFs that are as fully accessible as we can make them (we have tested them extensively; if anyone finds anything not accessible in them, we would be happy to address those issues) and accessible HTML versions that may be easier for people using assistive technologies to navigate. Again, if anyone has problems with the HTML versions, please let us know and we will address them.

    Ian said: However, and I do have to be absolutely clear about this: they’re living in a bit of a dream-land to suggest that people with disabilities should be involved throughout the process.
    Fair enough if you’re a government agency, but in the real world in most organisations you’re lucky to get one round of testing, so if you want to make a real difference across the web in general – including small studios and small budgets – it’s not the solution. Testing also means statistically insignificant small samples for which it’s impossible to represent even all broad categories of disability, let alone the differences and subtleties within them.
    So where I disagree is with the idea that testing with people with disabilities is the gold standard, and something that should replace all other means as inadequate.

    Response:
    What we are principally addressing in this work is the involvement of people with disabilities in the testing of the guidelines. If the guidelines do not ensure that barriers are removed for disabled web users, then they need improvement and further work. Note that in the dissertation, as in our previous article, we never argue that WCAG is a complete failure in this regard, simply that it needs more work, or an alternative that encompasses those problems currently not covered. We specifically say that the guidelines are ineffective at reducing the number of problems encountered. This likely means that as WCAG tests eliminate some problems, other problems are revealed which are no less of a barrier to people with disabilities.

    We are not sure what other evidence you could produce to show that the guidelines are effective – they are meant to make websites accessible for people with disabilities and we are testing whether that is the case. We are testing with three disability groups and on 16 websites, and we would welcome others contributing to this type of work. For example, Romen and Svanaes (2008, Proceedings of NordiCHI), in a smaller study with two websites, found that “only 27% of the identified website accessibility problems could have been identified through the use of WCAG” (p. 535).

    However, in relation to the development of an individual website:
    We do argue that a purely technical conformance approach, in which guidelines are the only means by which a site is checked for accessibility (an approach our work shows is still widely used), will not result in a barrier-free website. Given that the guidelines do not currently remove all barriers to people with disabilities, we recommend that developers who have the resources to do so test their websites with disabled users.

    We are well aware that this is a relatively expensive and time-consuming process and that not all developers can do it. This is all the more reason to try to improve expert evaluation, via WCAG or other means, so that barriers to people with disabilities are better detected in the future.

    However, we do regard evaluation by people with disabilities as the “gold standard”. What else could be the gold standard?

  13. Helen Petrie | July 29th, 2013 | 12:42 am

    On the comments made by Grant Broome

    Grant says: I work with other accessibility professionals at DIG Inclusion and having discussed the article, our fear, and we believe others in the community may agree on this, is that casual and perhaps unfamiliar readers will take the title of this article as factual rather than sensational and get no further than the first few pages of this lengthy and hard-to-digest study.

    Our response:
    We would like to point out that we did not choose the title of the article, that was chosen by a journalist and we were not consulted. We published a paper in CHI last year entitled “Guidelines are only half the story” which is factual with respect to the data we collected – for 32 blind users using 16 different websites, only half the problems they encountered were accounted for by WCAG. As for the study being “lengthy and hard-to-digest”, this is a PhD thesis and represents over three years intensive work. I don’t think I’ve ever read a PhD thesis that isn’t lengthy and rather hard to digest.

    We are in the process of producing a series of short papers that, while still technical, present the work in a more digestible form. The first two of these are available on our website: http://www.cs.york.ac.uk/hci/publications/index.html

    Grant said: Our initial reaction to the headline itself claiming that WCAG is “ineffective” we found to be an over-reaching statement and one which, we fear may result in readers unfamiliar with web accessibility dismissing the guidelines altogether. Many of us who use the guidelines in the real world know how essential they are in educating web developers, unifying objectives for browser and assistive technology developers, and as a tool for measuring accessibility conformance metrics (setting aside the decision by the author to measure against WCAG 1.0 which has been redundant for the past 4 years).

    Our response:
    The broad statement that WCAG is “ineffective” was the journalist’s choice of headline, whereas our own article title was “guidelines are only half the story”, a very different message.

    As stated above, we did not say that WCAG is “ineffective”; in fact, if you read the whole article you will notice that Professor Petrie is quoted as saying that WCAG has made “a great effort” and highlighted important problems. We also use the guidelines in the “real world”, providing consultancy to government departments and private companies. We also train web developers in the use and importance of the guidelines.

    As for the decision to measure against WCAG 1.0, it is stated a number of times in the article, and in the thesis, that we tested everything against both WCAG 1.0 and WCAG 2.0. This was important because it allows us both to compare our results with previous studies on this topic (which were all conducted under WCAG 1.0) and to work with the newest standard, where we would expect improved coverage of user problems.

    Grant said: As supporters of user-testing with disabled users, we would not promote WCAG as a be-all and end-all solution for accessibility: there are perhaps some improvements required to be made to the guidelines. We suggest these include a greater focus on people with multiple disabilities, particularly Deaf-blind users (who we cannot help but notice, are not included in this study). More importantly however, we would not advocate building a web product without a framework which includes the support of a robust and recognised set of guidelines such as WCAG 2.0.
    Approaching the development or testing of web products without this framework and instead relying solely on feedback from users is likely to result in an inconsistent and unreliable approach to developing a site which is not based on measurable metrics, but on individual opinion or preference.

    Our response:
    We are in total agreement with the use of WCAG as a starting point. However, the idea that feedback from users of a website is “unreliable” strikes us as very odd. In both usability and accessibility testing, we always find that real users have problems that we did not anticipate, which is why we regard user testing as the “gold standard”.

    Grant said: Some of the statements in the article we would suggest are troubling are as follows:
    “There is no significant difference in the number of user problems on non-conformant sites and on conformant sites”

    We are very surprised at the repeated claim that more highly conformant sites are not easier to use for disabled users:
    “websites with higher conformance to the guidelines are not easier to use for blind users.“

    Our response:
    We were also surprised by these statements, but they are the conclusions from the careful analysis and statistical testing of a large corpus of data (as mentioned above, 32 blind users with 16 websites, over 1000 tasks completed by users). If other researchers have empirical data from users with different findings, we would be most interested in sharing and comparing results.

    Grant said: This presumably includes criteria not specifically designed to aid blind people, but intended to improve accessibility for other groups of users. We noted that large groups of users such as mobility-impaired people were omitted from the study.

    Our response:
    WCAG conformance is not defined in relation to particular disability groups; the levels of conformance (A, AA and AAA) cover all users with disabilities. So the inclusion of other criteria in the conformance levels is not relevant to this argument.
    As noted elsewhere, the study included partially sighted people and people with dyslexia. We could not study the problems encountered by people with all disabilities, so we are careful to restrict our conclusions to the groups for whom we have data.

    Grant said: As keen and devoted advocates of accessibility for all document formats, we would also like to point out that although the PDF is tagged to PDF/A specification, a very high number of significant accessibility issues are present. In a document of this nature, we were very surprised to find such significant and compounding errors.

    Our response:
    We have provided accessible PDF and HTML versions of our papers at: http://www.cs.york.ac.uk/hci/publications/index.html
    If anyone has difficulty with the accessibility of the documents we control, we would like to hear from them and we will address those issues.

  14. Helen Petrie | July 29th, 2013 | 12:47 am

    Comments made by Detlev Fischer

    Detlev said: It looks like the research on which this thesis is based was aired a while ago in “Guidelines are Only Half of the Story: Accessibility Problems Encountered by Blind Users on the Web”, which was authored by Christopher Power, André Pimenta Freire, Helen Petrie, and David Swallow, all at the University of York, UK. It was presented at CHI 2012, May 5–10, 2012, Austin, Texas, USA and was published as part of the ACM digital library.

    Our response:
    André’s PhD was indeed the source of the paper presented at CHI 2012. His PhD investigated the problems encountered by three groups of disabled web users: blind people using screenreaders, partially sighted people using screen magnification software, and dyslexic people, who typically do not use any assistive technology. The CHI 2012 paper concentrated on the problems encountered by screenreader users; we have also published a preliminary write-up of the problems encountered by dyslexic readers (presented at the INTERACT 2011 Conference; the paper is available at http://www.cs.york.ac.uk/hci/publications/index.html). We will produce a paper with a more detailed analysis of the data from dyslexic web users and one on the problems of partially sighted web users.

    Detlev said: I have looked at the earlier study in some detail and I think there are serious methodological flaws that put a big question mark on the outcome of this research: http://url.ie/i0tc

    Our response:
    André’s PhD was examined and passed by the Department of Computer Science at the University of York in the UK, one of the top Computer Science departments in the country at a very highly rated university. There were two examiners: the external examiner is one of the most highly regarded experts on web accessibility in the world, and the internal examiner is an internationally renowned HCI expert. Both examiners praised the thesis and did not find “serious methodological flaws”. A thesis with serious methodological flaws would never be accepted in our department.
    The paper we published last year, in which you allegedly found “serious methodological flaws”, was published and presented at the most prestigious human-computer interaction conference in the world, the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (see http://en.wikipedia.org/wiki/Conference_on_Human_Factors_in_Computing_Systems). The acceptance rate for this conference is around 20%, so only about 1 in 5 papers submitted are accepted for presentation. Between 3 and 5 experts in the relevant field review each paper, and these reviews are then moderated by an Associate Chair of the conference and discussed in a committee of experts. Again, it seems extremely unlikely that a paper with “serious methodological flaws” would be accepted at this conference.
    We actually addressed the methodological issues that you raised about our CHI paper in our FAQ on our publications page quite some time ago, see http://www.cs.york.ac.uk/hci/publications/index.html

    The points you raised were:
    (1) Only blind users were included in the study
    Our response: This is not true, as is very clear from our FAQ and André’s thesis.

    (2) Some participants were used to very dated assistive technology (e.g., JAWS 5.0) but were asked to use recent versions such as JAWS 10.0 in the study

    Our response: Not true. We have clearly stated that this was a confusion on our part. The participant in question uses JAWS 5 at home but JAWS 10 at work, and so is completely competent in using JAWS 10. Indeed, your comment led us to an interesting new analysis, which shows that the JAWS version is not as critical as one might expect.

    (3) Manual audits of the sites’ WCAG conformance covered just the home page, whereas the user tasks presumably covered many other pages

    Our response: Our research (for the Disability Rights Commission) has already shown that there is a very high correlation between the WCAG conformance of a site’s homepage and that of its other pages, not just at the level of conformance (A, AA, AAA) but also in the actual number of WCAG checkpoints violated and the total number of violations. So it is a waste of time to test all pages.

    There is more information on all these points on our FAQ.
