Artificial intelligence and accessibility at TechShare Pro 2017: the voice of things to come

Artificial intelligence (AI) was high on the agenda at the recent TechShare Pro 2017 conference in London. Hosted by AbilityNet and RNIB, the event explored a wide range of topics around digital accessibility and accessible technology, with speakers discussing key ideas and developments.

One of the most popular themes of the event was AI and its potential benefits for accessibility. AI technologies have evolved at a tremendous pace over recent years and are now being used in everything from stock market trading to email management – but how can these technologies be utilised to help people with disabilities and impairments?

As delegates at the event heard, numerous innovative systems and projects using AI are in development and could have a huge impact in the near future. But, as some speakers pointed out, AI is also benefiting people right now, in devices and apps that many of us use every day. Here’s a summary of how AI was covered throughout the event.

Paul Smyth, Head of ICT Accessibility at Barclays, talked about the future of banking from an accessibility point of view and how AI is helping to enable it. Processes are becoming simpler, Smyth said, which helps users complete tasks with fewer issues. To show how mobile banking is evolving, he then demonstrated a voice-operated payment system within an app.

“Rather than users having to learn increasingly complicated systems, there’s been a paradigm shift, where those systems now learn about users, which is very powerful,” Smyth said.

Smyth also talked about the increasing use of biometrics systems in banking, such as fingerprint recognition on apps, alongside voice and facial recognition to identify customers.

He said: “These things are simple and secure; we just need to get more comfortable using them, and the technology needs to evolve too.”

Smyth also talked about how technology is increasing personalisation within banking and how this can improve user experience: “The exciting bit in terms of AI is how it can power predictive insights,” he said, explaining that AI will increasingly be used to advise customers on how they can manage their money more effectively.

Following on from Smyth’s presentation was a panel discussion on ‘What does AI mean for the future of accessibility?’, hosted by Robin Christopherson, Head of Digital Inclusion at AbilityNet. Ellie Southwood, Chair of RNIB, spoke about how various devices and apps are helping to make life easier for people with visual impairments. She explained how her Amazon Echo Dot ‘smart speaker’ (which responds to voice commands and questions from users) makes it simpler to find information, before introducing Seeing AI, a new talking camera app.

She said: “The app’s ability to recognise not just text, but also people and scenarios, is incredibly exciting, because in order to make decisions about the world, you have to interpret it – and if you can’t see, the interpretation you have is usually somebody else’s.”

Hector Minto, Technical Evangelist for Accessibility at Microsoft, then explained how he thinks AI will change accessibility. He said: “One of the challenges we have is to make accessibility matter to a wider audience and this is one of the roles that AI can fulfil. Rather than expecting somebody to provide information in an accessible format, hopefully AI will mean that all documentation becomes accessible in the future.”

Minto also spoke about Seeing AI and how he believes the app, and other similar technologies, can change interactions for people with disabilities. He said: “Seeing AI can recognise text, it can recognise people and emotion, and recognising something changes the social dynamic around disability.”

Explaining how someone who is blind could use the app to quickly identify who is around them, Minto said: “It empowers people to enter a room with their disability and take control, and that is a leveller.”

Following the discussion, an audience member asked the panel about potential difficulties in using AI-based voice technologies, particularly for people with cognitive impairments, and how these systems could be made more accessible.

Responding to the question, Ellie Southwood said that there are still “gaps” in the technology, but that using voice-operated AI systems to carry out complicated tasks “has to be the future.” She explained that these systems need to be customer-led and built around the specific needs of users to ensure they work effectively.

Hector Minto agreed with this point, saying: “People with disabilities are going to lead the knowledge base on voice interaction.”

Later in the day, Gareth Ford Williams, Head of Accessibility for Design and Engineering at the BBC, hosted a session on ‘The BBC Accessibility Journey and the Impact of Artificial Intelligence.’ Williams discussed how the BBC is using new and emerging technologies to make its services more accessible to users.

He explained how AI can be used in ‘object-based broadcasting’ – where a programme is delivered as separate components, such as audio, video and subtitles, that can be reassembled to suit each viewer – to learn about users’ viewing habits and tailor how programmes are delivered to them based on their requirements.

Speaking more generally about the potential of AI systems, Williams said: “Disenfranchised audiences are really going to benefit. For older audiences and others who struggle to use some assistive technologies, having something more humanistic as a mode of interaction is going to create a lot of opportunities for them.”

In the afternoon, Kiran Kaja, Technical Program Manager for Search Accessibility at Google, gave some insights into the current and future use of AI. He pointed out that “AI is everywhere. It’s already being used quite widely – for example, in speech recognition.” Modern speech recognition is built on ‘neural networks’ – machine learning systems loosely modelled on how the human nervous system works.

He explained that improvements to text-to-speech and predictive text systems are being made using AI, before discussing ‘computer vision’, in reference to the kind of image recognition that apps like Seeing AI use. Kaja said: “Being able to use computer vision to recognise objects and find the relationship between the objects – those things have really big potential for helping people with disabilities.”

Kaja also addressed a key issue that delegates had been discussing throughout the day: the ease of use of AI systems, particularly for users with impairments. He said: “The great thing about all of this is that as a user, you shouldn’t need to worry about what AI is and what it can do – it should all just work for you. That’s the hallmark of great UX [user experience] design.”

More information about TechShare Pro 2017, including videos and presentation materials from the event, can be found at the AbilityNet website.
