Researchers from multiple universities and NVIDIA developed A11yShape, a system enabling visually impaired developers to create, edit, and verify 3D models.
A11yShape was inspired by the struggles of a blind graduate student and aims to make 3D modeling more accessible for blind and partially-sighted users.
The system pairs OpenAI’s GPT-4o with OpenSCAD, a text-based 3D modeling language, so that all interaction can flow through screen readers and Braille displays.
A11yShape converts 3D models into token streams and presents user-interpretable descriptions, allowing iterative development without visual input.
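As a rough illustration of such a pipeline, the sketch below sends a user's OpenSCAD source to GPT-4o and asks for a plain-text description a screen reader could announce. The function names, prompt wording, and overall structure are assumptions for illustration, not A11yShape's actual implementation; only the OpenAI chat-completions call reflects a real API.

```python
# Hypothetical sketch of an A11yShape-style flow: OpenSCAD source in,
# screen-reader-friendly description out. Prompt text and helpers are
# illustrative assumptions, not the paper's actual code.

SYSTEM_PROMPT = (
    "You are an assistant for blind and low-vision 3D modelers. "
    "Describe the shape produced by the following OpenSCAD code in "
    "plain text suitable for a screen reader."
)


def build_messages(openscad_source: str) -> list[dict]:
    """Assemble the chat messages for a model-description request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": openscad_source},
    ]


def describe_model(openscad_source: str) -> str:
    """Ask GPT-4o for a textual description of the model.

    Requires the `openai` package and an API key in the environment.
    """
    from openai import OpenAI  # deferred so the sketch loads without the package

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=build_messages(openscad_source),
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A cube with a sphere subtracted from it, as a sample model.
    source = "difference() { cube(20, center=true); sphere(12); }"
    print(build_messages(source)[1]["content"])
```

Because the description round-trip is plain text, the same loop works for editing: the user revises the OpenSCAD source, re-runs the request, and hears how the model changed.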
The team’s research is published as open access in the Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25).
Read the full article, "A11yShape Brings LLMs to Bear on Assisting Blind, Visually Impaired Developers with 3D Modeling," on Hackster.io.