Researchers at the University of Texas at Dallas have created A11yShape, a tool that makes 3D modeling more accessible to people who are visually impaired. The platform provides an intuitive web interface that brings together a code editor panel, an AI assistance panel, and a model panel, giving blind programmers the tools they need to work on 3D designs independently.
Liang He, an assistant professor of computer science, began the project after a conversation with a classmate with low vision who was concerned about access to 3D modeling education. He recognized the need for a unified solution that would help visually impaired users understand the intricacies of 3D design. A11yShape tackles this challenge by giving users a way to understand how changes in code will affect their design outcomes.
Features of A11yShape
A11yShape’s interface consists of three primary panels: the code editor, AI assistance, and model display. The code editor lets users write and modify code directly, while the AI assistance panel offers contextual feedback, including descriptions written specifically for blind and low-vision users so they can take a direct part in shaping the design.
The model panel presents the rendered model as a tree-based hierarchy that stays synchronized with the other two panels. When a user selects a line of code or a model component, A11yShape highlights the matching elements across all three panels. This integration gives users a real-time, accurate view of how their code corresponds to the visual structure of their designs.
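The article does not describe how the synchronization works internally, but the core idea can be sketched as a bidirectional mapping between source-code lines and nodes in the model tree. The following is a minimal, hypothetical illustration (not the actual A11yShape implementation); the class and function names are invented for the example.

```python
# Hypothetical sketch of cross-panel synchronization: each node in the
# model tree records which source-code lines produced it, so selecting
# a code line can highlight model components and vice versa.

from dataclasses import dataclass, field

@dataclass
class ModelNode:
    name: str                                   # e.g. "cube", "union"
    code_lines: set                             # source lines that built this node
    children: list = field(default_factory=list)

    def walk(self):
        """Yield this node and all descendants."""
        yield self
        for child in self.children:
            yield from child.walk()

def nodes_for_line(root: ModelNode, line: int) -> list:
    """Selecting a code line -> names of matching model components."""
    return [n.name for n in root.walk() if line in n.code_lines]

def lines_for_node(root: ModelNode, name: str) -> set:
    """Selecting a model component -> the code lines that produced it."""
    lines = set()
    for n in root.walk():
        if n.name == name:
            lines |= n.code_lines
    return lines

# A toy model: a union combining a cube and a cylinder.
model = ModelNode("union", {1}, [
    ModelNode("cube", {2}),
    ModelNode("cylinder", {3}),
])

print(nodes_for_line(model, 2))        # selecting line 2 highlights the cube
print(lines_for_node(model, "union"))  # selecting the union highlights line 1
```

In a real system the same mapping would also drive the AI assistance panel, so a selected component's description refers to the highlighted code.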
“People like being able to express themselves in creative ways… using technology such as 3D printing to make things for utility or entertainment,” – Stephanie Ludi
Testing and Feedback
A11yShape was tested in a lab setting with four participants who had moderate to high levels of visual impairment and programming experience. Feedback from these users has been enthusiastic: A11yShape’s descriptions scored consistently high, with averages ranging from 4.1 to 5 on a 5-point scale for geometric accuracy, clarity, and avoidance of hallucinations.
These findings suggest the AI system is reliable enough for everyday use. One participant shared that A11yShape “provided the blind and low-vision community with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”
“On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity, and avoiding hallucinations, suggesting the AI is reliable enough for everyday use,” – research team
Future Developments
The A11yShape team plans to add tactile displays, real-time 3D printing support, and shorter AI-generated audio descriptions in future versions. These improvements are intended to make the tool even more accessible and usable for people with visual disabilities.
Liang He emphasized his commitment to creating meaningful tools for this community, stating, “I want to design something useful and practical for the group.” This dedication underscores the project’s intention to foster creativity and independence among blind programmers in the fields of technology and design.

