The Impact of AI in Advancing Accessibility for Learners with Disabilities


AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.

Help tools for users with hearing or sight problems. Credit: VectorMine / Shutterstock.com © 2024

The impact of artificial intelligence (AI) on various areas of higher education—including learning design, academic integrity, and assessment—is routinely debated and discussed. Arguably, one area that is not explored as critically or thoroughly is the impact of AI on digital accessibility, inclusion, and equity. Several exciting technological developments in this space offer promise and optimism inside and outside of higher education. These advancements afford people with disabilities more equitable access to the same educational services and resources offered to students without disabilities. Ironically, students with disabilities—who stand to gain the most from emerging AI tools and resources—are often the most disadvantaged or least able to use them.Footnote1 More concerning still is that few people in the disabled community have been asked to advise on the development of these products. A 2023 survey of assistive technology users found that fewer than 7 percent of respondents with disabilities believe there is adequate representation of their community in the development of AI products—although 87 percent would be willing to provide end-user feedback to developers.Footnote2

As an accessibility advocate with a hearing impairment, I have been keenly interested in AI advancements that have the potential to provide people with disabilities more equitable access to educational content—especially since the release of ChatGPT and other large language models (LLMs). However, the use of AI in educational technology and instruction is hardly new. The Programmed Logic for Automatic Teaching Operations (PLATO) system was developed at the University of Illinois in the 1960s, and Jaime Carbonell developed SCHOLAR at Bolt Beranek and Newman (BBN) in the early 1970s. Both computer learning tools are early forms of AI.Footnote3 More contemporary edtech developments were introduced in the 2000s, such as ALEKS, Knewton, and other intelligent tutoring systems (ITS)—popular, widely used student-facing AI courseware platforms. In 2009, Google introduced automatic captioning for YouTube videos, a groundbreaking development that demonstrated the potential of speech recognition to generate captions in real time or from recorded content.Footnote4 Although the technology was initially panned for its inaccuracy and high error rate, it gradually influenced similar capabilities in other technologies. Alongside significant advancements in automatic speech recognition (ASR), AI-generated automatic captions for web conferencing became widely available in the early to mid-2010s.

The relatively recent release of LLMs has ushered in a surge in AI product development in this space. A few recently introduced edtech products and services are described below. While this list is not exhaustive, it captures capabilities that were thought impossible only a short time ago.

Automated Image Descriptions

For screen readers to accurately convey the content of photographs, illustrations, and diagrams, content authors must add descriptions, labels, or alt text (also referred to as alternative text). With the advent of LLMs, AI technologies can auto-generate these descriptions, and several tools that do so are in early development and release. For example, Arizona State University recently launched an AI image-description utility that uses GPT-4o to analyze user-uploaded images and produce robust alternative-text descriptions. The tool can also analyze and extract embedded text (i.e., text that is not machine-readable) from slides and images.
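Arizona State University has not published its implementation, but the general pattern these tools follow can be sketched in a few lines. The Python example below is a minimal sketch, not ASU's actual code: it assumes the official openai SDK, an OPENAI_API_KEY environment variable, and a hypothetical file named lecture_slide.png, and it asks a vision-capable model for concise alt text.

```python
# Minimal sketch of LLM-based alt-text generation (not ASU's implementation).
# Assumes the openai SDK (pip install openai) and an OPENAI_API_KEY variable.
import base64
from openai import OpenAI

client = OpenAI()

def generate_alt_text(image_path: str) -> str:
    """Ask a vision-capable model for concise alt text for one image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (one or two sentences) "
                         "describing this image for a screen-reader user. "
                         "Transcribe any embedded text verbatim."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(generate_alt_text("lecture_slide.png"))  # hypothetical file name
```

The model does the heavy lifting; the main design decision in the wrapper is the prompt, which here also asks for verbatim transcription of any embedded text.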

Accessibility advocate and developer Cameron Cundiff created a Non-Visual Desktop Access (NVDA) add-on that provides semantically rich descriptions of images encountered on any website, in any software product, or among desktop icons. The tool uses the vision capabilities of the Google Gemini API to analyze images and generate robust descriptions, which are read back through NVDA's speech synthesizer.

Astica.ai used its Vision API technology to develop an image-description tool that captions images, identifies brands, and performs automatic content moderation. Users can upload complex images, and Astica.ai automatically scans them, identifies their elements, and generates detailed alt-text descriptions.

Researchers at MIT developed VisText to help people generate captions and descriptions of complex charts and graphs—among the most challenging image types for assistive technology to describe. This tool is particularly useful for describing complex patterns and trends within chart data.Footnote5

Darren DeFrain, an English professor at Wichita State University, led a team of developers that created Vizling, a mobile app designed to make multimodal media, such as comics, maps, graphic novels, and art, accessible to blind and low-vision readers. Screen-reading products have difficulty parsing comics and graphic novels because their panel-based layouts and speech balloons do not conform to predictable patterns.

Audio Description Generation

U.K.-based WPP is working with Microsoft to develop advanced audio description tools built on GPT-4. This technology generates enhanced audio descriptions of user-uploaded videos and images. The company is also working with the Rijksmuseum, the national museum of the Netherlands, to provide enhanced audio descriptions for its collection of nearly one million works of art, an approach that could open the door for libraries with extensive special collections.Footnote6 This tool is expected to be available soon.

Support for Cognitive and Physical Disabilities

Microsoft UK recently introduced a series of vignettes showcasing how AI technology is being used to support people with various cognitive and physical disabilities. Nearly all of the featured use cases have direct applications in higher education.

In 2023, Microsoft partnered with OpenAI to develop Be My AI, a digital visual assistant within the Be My Eyes app. Be My AI is powered by OpenAI's vision technology, which includes a dynamic new image-to-text generator. Users can send images and ask questions via the app, and an AI-powered virtual volunteer answers questions about the images and provides instantaneous visual assistance for a variety of tasks.Footnote7 This technology provides enhanced opportunities for learners who are blind or have low vision.

Goodwin University in Connecticut is experimenting with AI products to support neurodivergent students. For example, the university recommends GitMind for assistive notetaking, mind mapping, and brainstorming.Footnote8

The University of Central Florida, in conjunction with United Cerebral Palsy (UCP) of Central Florida, has developed "ZB"—an AI-driven socially assistive robot—as part of Project RAISE.Footnote9 ZB is designed to help students with disabilities develop and improve their social skills and can even teach them how to code. "He hangs out with students in their classes, affirming them with positive messages," according to a Kansas City PBS news story.Footnote10

Inclusive Design Support

GPT Accessibility CoPilot, developed by Joe Devon, co-founder of Global Accessibility Awareness Day (GAAD) and chair of the GAAD Foundation, helps content developers and instructional designers by analyzing the code structure of web and content pages and matching it against the WCAG 2.2 Success Criteria. If the code does not meet the criteria, Accessibility CoPilot suggests ways to improve it.
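The internals of Accessibility CoPilot are not public, but the flavor of check such a tool performs can be illustrated against a single criterion. WCAG 2.2 Success Criterion 1.1.1 requires that non-text content have a text alternative; the Python sketch below (the function name and sample markup are illustrative, and the beautifulsoup4 package is assumed) flags img elements that lack an alt attribute entirely.

```python
# Illustrative structural check against WCAG 2.2 SC 1.1.1 (non-text content).
# Not Accessibility CoPilot's code; assumes beautifulsoup4 is installed.
from bs4 import BeautifulSoup

def find_images_missing_alt(html: str) -> list[str]:
    """Return <img> tags that have no alt attribute at all.

    An empty alt ("") is deliberately allowed, because empty alt text
    is the correct markup for purely decorative images.
    """
    soup = BeautifulSoup(html, "html.parser")
    return [str(img) for img in soup.find_all("img")
            if img.get("alt") is None]

page = """
<img src="chart.png" alt="Enrollment by term, 2020-2024">
<img src="spacer.gif" alt="">
<img src="campus.jpg">
"""
for tag in find_images_missing_alt(page):
    print("Missing alt text:", tag)  # flags only campus.jpg
```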

Ask Microsoft Accessibility is a free tool that can be used by faculty and students to develop accessible course content. Users can type a question such as, "How do I make Excel files more inclusive?" and the AI assistant provides several solutions in near real time. This product is in early release.

Procter & Gamble is using an AI-assisted QR-code technology called NaviLens to assist people who are blind or have low vision. NaviLens can be used to locate products on densely stocked shelves and to read usage instructions or ingredient lists.Footnote11 The technology is also available to venues that require wayfinding and sign-reading services. NaviLens is free to download and use, and the company is currently offering its proprietary codes to schools. It has also partnered with Microsoft to provide greater autonomy to users of a specialized headset developed by ARxVision.Footnote12

Coding and Development Support

GitHub recently launched Copilot, a code completion tool developed in conjunction with Microsoft and OpenAI. GitHub Copilot Chat is a complementary chat interface that can help programmers learn about accessibility and improve the accessibility of their code.Footnote13

Accessibility and training company Deque announced the release of axe DevTools AI, a suite of tools that web developers can use to test and correct the digital accessibility of web content and other website elements. For example, Colorvision, just one of the tools in the suite, automatically checks for insufficient color contrast. At the Axe-Con 2024 conference, Gregg Vanderheiden, professor emeritus at the University of Maryland, predicted that AI-powered tools will make accessibility nearly ubiquitous across digital products and that these products will adapt to users' accessibility preferences in real time.Footnote14
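Colorvision's implementation is proprietary, but contrast checkers of this kind apply the contrast-ratio formula defined in the WCAG specification: each color's relative luminance is computed from linearized sRGB channels, and the ratio of the lighter to the darker luminance (each offset by 0.05) must reach 4.5:1 for normal-size text at Level AA. A minimal Python sketch, with function names of my own choosing, follows.

```python
# WCAG 2.x contrast-ratio math, as applied by color-contrast checkers.
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance per WCAG, from 8-bit sRGB channels."""
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Ratio of lighter to darker luminance, each offset by 0.05."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Level AA requires at least 4.5:1 for normal-size text.
ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # #777777 on white
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'}")  # 4.48:1 -> fail
```

The example is the classic borderline case: medium gray (#777777) on white comes out at roughly 4.48:1 and fails AA, which is exactly the kind of near-miss an automated checker catches and a designer's eye does not.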

Translations, Captions, Lip Reading, and Speech Recognition

LLMs have made possible a variety of new translation, captioning, lip-reading, and speech recognition tools. For example, Microsoft Copilot+ PCs include live captions with real-time translation across dozens of languages. Previously, this capability was available only in certain productivity products, such as PowerPoint; it is now poised for wide availability across various Microsoft productivity products.Footnote15

SRAVI (Speech Recognition App for the Voice Impaired) is an AI-powered lip-reading app developed by Fabian Campbell-West, co-founder and CTO of Liopa, a software development company in Belfast, Northern Ireland. SRAVI was initially developed to help ICU and critical-care patients who have lost the ability to speak communicate more effectively with their families and health care providers. In 2023, the app was being tested on patients who had undergone a total laryngectomy.Footnote16 Liopa is a spin-out from Queen's University Belfast and its Centre for Secure Information Technologies. Although the company was dissolved earlier this year, the SRAVI app is still available for download.Footnote17

Ava is a mobile app that allows people who are deaf or hard of hearing to take part in group conversations in English, Dutch, French, German, Italian, or Spanish, with limited conversational support for twenty spoken languages. People engaged in a conversation can open Ava on their phones and speak as the app listens. Ava converts spoken words into text in near real time, rendering each speaker's words in a different color so that those who need to read along can follow the conversation.Footnote18

The University of Illinois is working with Microsoft, Google, Amazon, and several nonprofit organizations on the Speech Accessibility Project, an interdisciplinary initiative "to make voice recognition technology useful for people with a range of speech patterns and disabilities." University researchers are recording individuals who have Parkinson's disease, Down syndrome, cerebral palsy, stroke, and amyotrophic lateral sclerosis. Those recordings are used to train an AI automatic speech recognition tool. According to the Speech Accessibility Project website, "Before using recordings from the Speech Accessibility Project, the tool misunderstood speech 20 percent of the time. With data from the speech accessibility project, this decreased to 12 percent."Footnote19
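Figures like the project's "20 percent" and "12 percent" are typically word error rates (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the system's transcript into the reference transcript, divided by the number of words in the reference. Assuming the project uses this standard metric, a minimal Python sketch of the computation follows.

```python
# Minimal word-error-rate (WER) computation: Levenshtein distance over words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five: WER = 0.20, i.e., 20 percent.
print(word_error_rate("please open the course syllabus",
                      "please open the car syllabus"))
```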

Conclusion

These are just a few of the exciting developments emerging from the intersection of AI and accessibility. Many people in the higher education community are rightfully cautious about the use of AI. However, numerous products and services that promise more equity and inclusion for people with disabilities are currently available or in development.

Notes

  1. Archy de Berker, "AI for Accessible Design," Medium (website), November 28, 2017.
  2. "Insights: AI and Accessibility," Fable (website), accessed June 2024.
  3. University of Illinois Urbana-Champaign Grainger College of Engineering, "PLATO and the Genesis of Computer Engineering," Limitless Magazine, fall 2020; Wenting Ma, Olusola O. Adesope, John C. Nesbit, and Qing Liu, "Intelligent Tutoring Systems and Learning Outcomes: A Meta-Analysis," Journal of Educational Psychology 106, no. 4 (2014): 901–918.
  4. "Unusual Beginnings on Google Video; YouTube CC History pt.1," Datahorde (blog), August 28, 2020.
  5. Benny J. Tang, Angie Boggust, and Arvind Satyanarayan, "VisText: A Benchmark for Semantically Rich Chart Captioning" (presentation, 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada, 2023).
  6. Satya Nadella, "Closing Video: Satya Nadella at Microsoft Build 2024," Microsoft, May 21, 2024, YouTube video, 2:20.
  7. "Microsoft Joins Be My Eyes' Be My AI Beta to Take Accessibility of its Products to the Next Level," Be My Eyes (blog), Microsoft, accessed June 2024.
  8. "AI and Accessibility: How Goodwin Uses Artificial Intelligence to Support Neurodiversity," Goodwin University ENews, April 26, 2024.
  9. Nicole Dudenhoefer, "Real Results," Pegasus: The Magazine of The University of Central Florida, spring 2024.
  10. Cuyler Dunn, "Technology Is Reshaping Education. Are Schools Ready?" Flatland, June 19, 2024.
  11. Sam Latif, "Sam Latif P&G NaviLens Accessible Code," September 4, 2023, YouTube video, 2:22.
  12. Jordana Joy, "ARxVision, Seeing AI, and NaviLens Announce Partnership for Headset Programme," Ophthalmology Times Europe, April 1, 2024.
  13. Ed Summers and Jesse Dugas, "Prompting GitHub Copilot Chat to Become Your Personal AI Assistant for Accessibility," GitHub (blog), October 9, 2023.
  14. Gregg Vanderheiden, "How AI Will Help Us Re-Invent Accessibility, Lower Industry Load, and Cover More Disabilities" (presentation, Axe-Con 2024, virtual conference, February 2024).
  15. Yusuf Mehdi, "Introducing Copilot+ PCs," Official Microsoft Blog, Microsoft, May 20, 2024.
  16. "Liopa Wins Healthcare Contract with New AI Lip-Reading Technology," Institute of Electronics, Communications & Information Technology, Queen's University Belfast, October 18, 2021; Leigh McGowran, "Liopa's Lip-Reading App Is Being Tested on US Patients," Silicon Republic, January 11, 2023.
  17. Ryan McAleer, "Liquidator Appointed to Belfast Lip Reading Technology Firm Liopa," The Irish News, May 23, 2024.
  18. Jackie Snow, "How People With Disabilities Are Using AI to Improve Their Lives," NOVA (website), January 30, 2019.
  19. "Speech Accessibility Project," Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, accessed June 2024.

Rob Gibson is Dean at WSU Tech.

© 2024 Rob Gibson. The content of this work is licensed under a Creative Commons BY-ND 4.0 International License.