
E-Access Bulletin – Issue 198, March 2018

e-Access Bulletin is produced with the support of Thomas Pocklington Trust.

To forward this free publication to others, please use the forward link at the end of the bulletin instead of your email application’s ‘forward’ button. Please encourage others to subscribe at our sign-up page.

For HTML and plain text versions of previous issues, please visit the e-Access Bulletin Archive. A plain text version of this issue will be available in the archive shortly. In the meantime, please email eaccessbulletin@gmail.com if you would like a plain text version of this issue emailed to you directly.

Please email questions, comments, article ideas and news leads to: eaccessbulletin@gmail.com.

Issue 198 contents

Section One: News

01: Boost UK industry through assistive tech, MPs and academics declare.

– Cross-parliamentary group talk tech and the UK Industrial Strategy.

02: 3D audio maps out the world in new app for visually impaired users.

– ‘Audio beacons’ and directional sound cues help users navigate cities.

03: Online toolkit aims to shake up Hollywood’s diversity issues.

– Changing how people with disabilities are portrayed on-screen.

Section Two: News in brief

04: Mapping Out – Wheelchair access routes in Google Maps;

05: Awards Season – Tech4Good Awards 2018 launched;

06: Embracing Human Diversity – accessible tech and human rights.

Section Three: The Inbox – Readers’ forum

07: Literary Endeavours – Braille transcription request.

Section Four: Report

08: Understanding screen-reader navigation – a tale of two rooms.
What is it really like to use a screen-reader? Ryan Jones from accessibility agency The Paciello Group sets out a series of scenarios to illustrate how screen-reading software perceives online content, and how this can help in creating clearer web pages for all.

[Contents ends]

Section One: News

01: Boost UK industry through assistive tech, MPs and academics declare

Driving economic growth through technology, the disability employment gap, and robotics in healthcare were some of the topics discussed at the latest meeting of the All-Party Parliamentary Group for Assistive Technology (APPGAT).

The aim of the meeting was to explore assistive technology in relation to the UK Industrial Strategy, unveiled by the Government in November of last year, which set out plans to boost the UK economy and industry. Key to the strategy are £725 million of planned investment from an Industrial Strategy Challenge Fund and four “grand challenges” that will affect global development: artificial intelligence; clean growth; ageing society; and the future of mobility.

Chaired by APPGAT co-chair and renowned Paralympian Lord Chris Holmes, the meeting began with a panel of MPs (including the Minister of State for Disabled People, Health and Work) and academics, each giving a short speech about assistive technology, before a Q&A session.

Hazel Harper from Innovate UK (the government’s technology and innovation body) explained that £98 million of the Industrial Strategy’s funding will go towards healthy ageing. She said: “Older people will be able to lead fuller lives, supported by ‘smart home’ technology, wearable devices and tech-enabled healthcare services. We need to develop products and services that are desirable long before they are needed.”

Bill Esterson MP, Shadow Business and International Trade Minister, talked about the value of exporting assistive technology. He said: “Assistive technology is one of the UK’s success stories, but we can do so much more. In a market where the World Health Organisation states that more than one billion people need one or more assistive products, there is a huge opportunity.”

Nigel Harris, chief executive of charity Designability, spoke about the need to push assistive tech into the mainstream: “The biggest barrier we’re facing is general public awareness of the benefits of assistive technology. There is a reluctance to fund it and perceptions are often poor. Until we can unlock this mainstream consumer market, we won’t fully realise the potential.”

Professor Catharine Holloway, co-director of the Global Disability Innovation Hub (GDI), talked about the GDI’s varied work with assistive tech in the future of mobility, and related opportunities in the Industrial Strategy. She said: “The UK has a pivotal role to play in scoping out what the market for assistive tech will be globally.”

David Frank from Microsoft talked about artificial intelligence (AI), stating that “Assistive technology is at its best when powered by AI, so [Microsoft] was pleased to see that AI was a theme in the Industrial Strategy.” He also pointed out that assistive tech isn’t just for those with accessibility needs. He said: “It’s also about a wider use of technology to empower all of us to be more productive at work or in our private lives, and that’s something that the Industrial Strategy gets at.”

Alex Burghart MP talked about assistive technology and employment, stating that government needs to increase its expertise and “get up to speed on what’s already out there.” In relation to the Industrial Strategy, Burghart asked: “We have four ‘grand challenges’ – why not have assistive technology running through all of them or alongside as an additional challenge?”

Sarah Newton, Minister of State for Disabled People, Health and Work, also stressed the need for assistive tech to be integrated into the Industrial Strategy. She said: “It shouldn’t be a separate thing, it should be embedded in everything we do, because it has the potential to transform so many people’s lives.”

Newton also highlighted the fact that “there is still a very stubborn disability employment gap,” before discussing the government’s Access to Work scheme, which funds equipment for people with health conditions in employment. Newton explained that “there’s no list which restricts how Access to Work can be used … I’d be happy if people contact me to ask, ‘Why isn’t this [technology] on there?’ I don’t think it’s a list with an end, because innovation is happening every day.”

Afterwards, questions from attendees covered a wide range of topics, including the potential of robotics to assist in healthcare. One questioner from the floor claimed that “Japan has announced that by 2020, four out of five care recipients will be supported by robotic technology,” before asking whether the UK could pursue a similar ambition, to the benefit of both healthcare and the UK tech industry.

Elsewhere, there was discussion on the need for effective provision of assistive technology in schools, with one attendee lamenting the lack of training available for teachers in classrooms where assistive technology is used.

Read more about the UK Industrial Strategy at GOV.UK.

Read more about the All-Party Parliamentary Group for Assistive Technology at the APPGAT website.

Comment on the ‘APPGAT and Industrial Strategy’ story now at e-Access Bulletin Live.

02: 3D audio maps out the world in new app for visually impaired users

Soundscapes and audio landmarks are two of the features in an innovative new navigation app designed for users with sight loss.

Designed by Microsoft, the Soundscape app maps out locations using 3D audio to help users build a picture of their surroundings, allowing them to locate places such as restaurants, shops and specific addresses.

The 3D element provides various beneficial features, such as calling out points of interest and street names as the user passes them, either on foot or on public transport. Users are advised to wear a pair of stereo headphones when using the app.

A key feature of Soundscape is the ‘audio beacons’ that users can set at specific points. For example, a user might choose a café or known address to set as a beacon. A ‘3D audio cue’ generated by the app then tells users where this beacon is as they move around, with the sound appearing to come from the direction of the beacon. This helps the user create a mental picture of their surroundings and allows them to navigate towards familiar locations.
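For developers curious about the general technique, a directional cue like this can be sketched with binaural (HRTF) panning, which browsers expose through the Web Audio API. The TypeScript below is a minimal, hypothetical illustration only, not Microsoft’s implementation; the oscillator tone and bearing parameter are stand-ins for a real beacon sound and a real bearing calculation.

    // Minimal sketch: play a tone that appears to come from a given
    // compass bearing, using Web Audio's binaural (HRTF) panning.
    // Hypothetical stand-in for a beacon cue, not Microsoft's code.
    function playBeacon(bearingDegrees: number): void {
      const ctx = new AudioContext(); // browsers require a user gesture first

      const tone = ctx.createOscillator(); // stands in for the beacon sound
      tone.frequency.value = 880;

      // HRTF panning creates the '3D' effect over stereo headphones.
      const panner = new PannerNode(ctx, { panningModel: 'HRTF' });

      // Place the source on a unit circle around the listener at the given
      // bearing; in Web Audio coordinates the listener faces -Z by default.
      const radians = (bearingDegrees * Math.PI) / 180;
      panner.positionX.value = Math.sin(radians);
      panner.positionZ.value = -Math.cos(radians);

      tone.connect(panner).connect(ctx.destination);
      tone.start();
    }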

Speaking to e-Access Bulletin, Jarnail Chudge, an Innovation Architect at Microsoft, said that the app differs from traditional “turn-by-turn navigation” technologies. He said: “Soundscape is different, in that it helps people build a mental appreciation of their environment, empowering their own way-finding choices. It’s about giving a person choice rather than prescribing the direction they take.”

Microsoft point out that Soundscape is not meant to act as a sole form of navigation and is intended to complement other methods, such as a guide dog and cane. Chudge said: “People still need to develop the skills to travel around safely and take part in mobility training. Our work with Guide Dogs and other partners shows that when using Soundscape, people still make key decisions, such as when to cross a road, what route to take and where they want to travel.”

The app includes other more traditional mapping features, such as the ‘Around me’ and ‘Ahead of me’ functions, which describe points of interest surrounding and in front of where the app is being used.

Microsoft worked with Guide Dogs UK and LightHouse for the Blind and Visually Impaired to help develop the app and ensure that it was tested by visually impaired users.

Read more about the Soundscape app at the Microsoft Research website.

Comment on the ‘Microsoft Soundscape app’ story now at e-Access Bulletin Live.

03: Online toolkit aims to shake up Hollywood’s diversity issues

A free online resource has been launched to help the film and entertainment industries hire more people with disabilities.

Created by non-profit organisation RespectAbility, the Hollywood Disability Inclusion Toolkit features information on specific disabilities, details about apps that can benefit users with different conditions, a documentary on ‘the evolution of disability in entertainment’, best practice examples of disability represented in TV and film, and links to organisations that can help with hiring people with disabilities.

Lauren Appelbaum, one of the toolkit’s authors and RespectAbility’s Communications Director, told e-Access Bulletin that people with disabilities had “consistently been overlooked” in Hollywood, despite consumers with disabilities representing a $1 billion segment of the market.

Appelbaum explained that, as well as helping to shape a more diverse entertainment industry, the toolkit aims to improve how people with disabilities are portrayed in film: “Storytellers often make glaring, yet easy to avoid, errors when covering our community,” she said. “A judge can be in a wheelchair, a barista can have Down’s syndrome and a teacher can have one arm – all without the focus being on the fact they have a disability. TV and film will be much more authentic when every crowd shot includes people with visible disabilities.”

The hope is that the toolkit will create wider hiring of people with disabilities throughout the entertainment industry, both on-screen and behind the scenes. “People with disabilities need to be in the writers’ room, behind the cameras and on set as actors in the same proportions that exist in life,” Appelbaum said.

The guide also features advice on using appropriate terminology and ‘etiquette’ when working with people with disabilities, as well as links to advocacy organisations (such as the National Arts & Disability Center) and specialist performance groups, such as the National Theatre of the Deaf, Actors for Autism and a Facebook page building a database of trained actors with disabilities looking for work.

Feedback on the toolkit so far has been “really positive,” Appelbaum said, with many people from the entertainment industry sharing the resource over social media: “I have received calls and emails from production companies asking for more information and opportunities to collaborate. The most common request is to review a script to ensure they are using the correct lexicon.”

Additional resources will continue to be added to the toolkit in future.

Read the Hollywood Disability Inclusion Toolkit in full at the RespectAbility website.

Comment on the ‘Hollywood Disability Inclusion Toolkit’ story at e-Access Bulletin Live.

[Section One ends]

Section Two: News in brief

04: Mapping Out

Wheelchair-accessible route information has been introduced to Google Maps for six cities: London, New York, Tokyo, Mexico City, Boston and Sydney. To find the new feature (available on both desktop and mobile versions of Google Maps), users input a destination and tap ‘directions’, then ‘public transport’. In the ‘Options’ menu under ‘Route options’, ‘Wheelchair accessible’ can then be selected. Wheelchair users and advocacy groups have welcomed the move, but stress that its effectiveness depends on accurate transport data from authorities.

Read more about wheelchair accessible maps at the Google Blog.

05: Awards Season

The 2018 AbilityNet Tech4Good Awards have launched and nominations for individuals or organisations are open. Now in its eighth year, the event celebrates innovative uses of digital technology to help others, with categories including accessibility, digital health and digital skills. Last year’s winners included the world’s first multi-line Braille e-reader and an online charity that connects parents of children with disabilities.

Read more about the awards, including entry details, at the Tech4Good website.

06: Embracing Human Diversity

An article on how accessible technology can help enable human rights has been published as part of a new ‘think piece’ series on technology and sustainable development, launched by the United Nations Research Institute for Social Development (UNRISD). The article, titled ‘Embracing Human Diversity: Policies and Enabling Factors for Accessible Technologies’, was written by Alejandro Moledo, from the European Disability Forum, and examines universal design and ICT accessibility within the EU.

Read the article on accessible technology and human rights at the UNRISD website.

[Section Two ends]

Notice: Thomas Pocklington Trust

E-Access Bulletin is brought to you with the kind support of Thomas Pocklington Trust, a national charity delivering positive change for people with sight loss. Find out more about their work at the Thomas Pocklington Trust website.

[Notice ends]

Section Three: The Inbox – Readers’ Forum

07: Literary Endeavours

Long-time Bulletin reader and regular correspondent Anthony Bernard gets in contact to ask about book transcription services:

“I have become interested in classical languages and have downloaded a book called ‘Greek Grammar in a Nutshell.’ I would like to have it transcribed into Braille. Does anyone know of any transcription services that can undertake the task at low cost?”

Send any tips for Anthony – plus any other questions, comments and responses – to:
eaccessbulletin@gmail.com.

[Section Three ends]

Notice: RNIB Connect Radio and e-Access Bulletin

E-Access Bulletin will be appearing on RNIB Connect Radio each month on The Early Edition programme. Hear more about the Bulletin and the content coming up in each issue, as we discuss the latest accessible technology news and readers’ questions with Allan Russell.

Episodes will be available after broadcast as podcasts from the RNIB Connect Radio site. Listen to RNIB Connect Radio online or via television, smartphone or radio. Find more information about the Early Edition at the RNIB Connect Radio website.

[Notice ends]

Section Four: Report

08: Understanding screen-reader navigation – a tale of two rooms

By Ryan Jones.

[Editor’s note: This is an edited version of an article originally published by The Paciello Group, an international accessibility agency. The original post is linked to at the end of this article. Its author, Ryan Jones, is a project manager and trainer at The Paciello Group.]

For those of us who use screen-reading software such as JAWS, NVDA, or VoiceOver to access information on the web, the user experience can be quite different from that of people who view the content visually. One of my goals throughout the many accessibility-focused training classes I have led has been to help others more accurately understand what it is like for someone using screen-reading software to navigate through a web page.

Most people want to jump straight into the technical details: what keystrokes do I press? Which screen-reader should I test with? What browser should I use?

While these are all important considerations, it is best to first step back and ask: “What is the experience like and how can I simulate that experience if I can see the screen?”

I would like to present several illustrations that have been effective for communicating answers to that question.

An open door

Let’s set the stage for our first illustration. Imagine you have just opened a door and are looking into a large conference room. In the centre of the room is a large conference table with ten chairs (five on each side of the table). Seated at the table are two men and two women, all four on the same side, facing the door where you are standing. On the far side of the room (behind the people seated at the table) are three large windows that look out over a courtyard with benches, flowers, and small trees. On the right side of the room is a counter with a coffee pot and microwave sitting on it. The left side of the room has a large flat-screen television mounted on the wall.

Assuming you are not already familiar with the layout of this room, what is the first thing you would do after opening the door? Some of you might visually scan the room from left to right. Some might scan from right to left. Others might first look at the table in the centre and then scan the perimeter of the room. No matter how you do it, most of you would in some way scan the room with your eyes to get a quick sense of the layout and contents. The scan might only take a couple of seconds and most of you won’t even realise you did it. You might then focus in on certain elements of interest, such as the people sitting at the table or the television.

A darkened room

Now, let’s reimagine the scene. This time when you open the door, the room is completely dark. No light is present and you can see absolutely nothing at first glance. But you have been given a small flashlight. When you switch it on, the light allows you to see a small area at a time. The area you can see is a small circle about two feet in diameter.

How would you now observe the contents of the room?

Some of you might move the light back and forth from left to right, starting at your feet and moving away from you. Some might start from the back of the room and move the light toward you, while others might point the light at various places in the room with no particular pattern. As you move the light around the room, you will need to build a mental map or image of what is in the room and how it is laid out. Building this mental map will take significantly longer than the quick visual scan you made when all the lights were on. As you move the flashlight around, you will need to remember each thing you have seen and how it relates to everything else. If you forget where something is located, it will take more time to find it again.

For example, was the counter with the coffee pot on the right side of the room or was it in the back? How many people were sitting at the table? Was it four or five?

Answering these questions when you can see the entire room at once will take little effort, but answering them when you can only see a small area at a time will take much longer.

An analogous scenario

This second scenario is analogous to how a screen-reader user reviews a web page or smartphone app.

While keyboard commands or touch gestures can move the screen-reading software around the page, it is only possible to read one thing at a time. There is no way for the visually impaired user to get a quick (one-to-three second) overview of the page, similar to what someone who can see the screen might do.

Fortunately, accessible page navigation techniques such as headings or regions can help the user focus in on certain areas of the page.

Stepping back to our dark room scenario from above, imagine there are now small red dots of light on key elements in the room, such as the table, counter, television and each person sitting at the table. You still have to use the flashlight to look around, but the red dots give you an idea of where the most important things might be located.
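In web terms, those red dots correspond to headings and landmark regions, which screen-reading software lets users jump between directly. As a rough, hypothetical illustration (not from the original article), this TypeScript snippet lists the jump points a page exposes:

    // Rough sketch: list the 'red dots' on a page -- the headings and
    // landmark regions that screen-reader users can jump between.
    function listNavigationPoints(): void {
      const selector =
        'h1, h2, h3, h4, h5, h6, main, nav, header, footer, aside, [role]';
      document.querySelectorAll(selector).forEach((el) => {
        const label = el.getAttribute('role') ?? el.tagName.toLowerCase();
        console.log(label, el.textContent?.trim().slice(0, 40));
      });
    }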

Another significant challenge that screen-reader users may face is dynamically changing content on a page. Returning to our light and dark room examples from above, pretend that one of the men gets up and moves to the other side of the table. There are now two women and one man on one side of the table, and one man on the other side. In the lit room example, you would most likely notice the movement as it happens.

Even if you weren’t looking directly at the man who moved, you would probably notice movement out of the corner of your eye and then turn to see what was happening. In our dark room scenario it would be very difficult to know that anything happened unless you happened to have the flashlight beam directly on the man who moved at the exact right time. It is more likely you would never know he moved until later, when you might move the light over the chair he vacated.

This, in effect, is what happens when page content changes but does not alert screen-reading software. The user may never know that something changed on the page unless they happen to move across the new information and realise that it is now different.

This challenge is best solved by ensuring that dynamic content uses techniques such as alerts or live regions, which cause the screen-reading software to announce the updated information to the user.

In our dark room scenario above, the man might verbally announce that he is moving from one side of the table to the other. Even if your light wasn’t on him, you would hear the announcement and better understand what is changing.
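On the web, the usual mechanism for such an announcement is an ARIA live region. Below is a minimal TypeScript sketch of the technique, assuming a browser page; the element ID is hypothetical.

    // Minimal sketch: announce a dynamic change via an ARIA live region.
    // Screen-reading software watches the region and speaks text placed
    // into it, even when the user's focus is elsewhere on the page.
    function announce(message: string): void {
      let region = document.getElementById('live-announcer');
      if (!region) {
        // In practice, create the region at page load so that even the
        // first update is reliably announced.
        region = document.createElement('div');
        region.id = 'live-announcer';
        region.setAttribute('role', 'status'); // implies polite announcements
        region.setAttribute('aria-live', 'polite');
        document.body.appendChild(region);
      }
      region.textContent = message;
    }

    // Usage: announce('Seating updated: one person moved across the table.');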

In one final illustration, let’s take the windows that look out to the courtyard. In the lighted room scenario you would be able to quickly see that the windows face a courtyard with benches, flowers and trees. But in the dark room example, even if you pointed the light at the windows, you would not be able to see what was outside. This is analogous to visual elements, such as images, that have no text label associated with them.

For example, screen-reading software can identify that an image is present on a page, but the only way it can communicate information about the image is through the alternative text label that can be assigned. Without the text label, the screen-reader user would have no idea what the image is showing. In our dark room example, a sign might be placed next to the windows with a description of what appears outside. When you locate the windows with your light, you would then be able to read the sign.
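Developers can check their own pages for these unlabelled ‘windows’. The following TypeScript snippet is one simple, hypothetical way to flag images that screen-reading software cannot describe:

    // Sketch: flag images with no alternative text at all. An image with
    // alt="" is treated as decorative and skipped by screen readers, so
    // only a missing alt attribute is ambiguous.
    function findUnlabelledImages(): HTMLImageElement[] {
      return Array.from(document.querySelectorAll('img')).filter(
        (img) => !img.hasAttribute('alt'),
      );
    }

    findUnlabelledImages().forEach((img) =>
      console.warn('Image without alternative text:', img.src),
    );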

Better understanding

One of the best ways to understand the screen-reader user experience is to try it yourself. It is certainly advantageous to try using screen-reading software to navigate web page content. In addition, though, here is a simple exercise to simulate the scenarios above.

1. Print out a paper copy of a web page. I recommend one that is not too large, but contains a variety of elements, such as text, links, menus, etc.

2. Find a blank sheet of paper and make a small hole in the centre of it. The hole should be about the size of two or three words (around half an inch in diameter is usually sufficient).

3. Place the paper with the hole in it over the printout of the web page and try making sense of what is there. Slide the paper around to read the contents of the web page printout below.

It will probably be very difficult and time-consuming to understand what is on the page, but this gives you a general idea of what it is like for a screen-reader user, especially if no page navigation techniques are used.

[Editor’s note: this is an edited version of an article from The Paciello Group. Read the original version at The Paciello Group’s website.]

Comment on the ‘Screen-reader navigation’ story at e-Access Bulletin Live.

[Section Four ends]

End Notes

How To Receive E-Access Bulletin

To subscribe or unsubscribe to this free monthly bulletin, visit our sign-up page. Please encourage others to sign up!

Please send requests, comments and ideas for news or features to: eaccessbulletin@gmail.com.

To forward this free publication to others, please use our forward page link.

To view previous issues in text or HTML format, please visit the e-Access Bulletin Archive.

E-Access Bulletin may be reproduced as long as all parts, including this copyright notice, are included, and as long as people are always encouraged to subscribe with us individually by email. Please also inform the editor when you are reproducing our content. Sections of the bulletin may be quoted as long as they are clearly sourced as ‘taken from e-Access Bulletin, a free monthly email newsletter’, and the following website address is also cited:

www.headstar.com/eablive.

Staff

Editor: Tristan Parker

Technical Director: Jake Jellinek

Accessibility Advisor: Dr. Nick Freear

ISSN 1476-6337.

ISSUE 198 ends.
