
Stanford increasing access to 3D modeling through touch-based display

With the goal of increasing access to making, engineers at Stanford University have collaborated with members of the blind and visually impaired community to develop a touch-based display that mimics the geometry of 3D objects designed on a computer.

Video by Farrin Abbott

To make computer-aided design more accessible to people who are blind and visually impaired, Stanford researchers developed a display that can be paired with 3D design software to quickly produce touchable representations of a user’s work-in-progress.

Creating a 3D object with computer software is often the first step in producing it physically, and it can be burdensome for people who are blind or visually impaired. Even with 3D modeling software that offers more accessible ways of inputting designs, blind and visually impaired designers still have to evaluate their work either by creating a physical version they can touch or by listening to a description provided by a sighted person.

“Design tools empower users to create and contribute to society but, with every design choice, they also limit who can and cannot participate,” said Alexa Siu, a graduate student in mechanical engineering at Stanford, who developed, tested and refined the system featured in this research. “This project is about empowering a blind user to be able to design and create independently without relying on sighted mediators because that reduces creativity, agency and availability.”

This work is part of a larger effort within the lab of Sean Follmer, assistant professor of mechanical engineering, to develop tactile displays – displays that relay information through touch – for various purposes, such as human-computer interaction and new ways of sharing or explaining 3D information. Siu presented the current work on Oct. 29 at the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS). Although the display she presented is a prototype, the lab hopes to make a version that is less expensive, larger and able to create shapes in greater detail.

Joshua Miele, co-author on the paper, is a blind scientist, designer and educator who helped develop the system while he was associate director of technology research and development at the Smith-Kettlewell Rehabilitation Engineering Research Center. “It opens up the possibility of blind people being, not just consumers of the benefits of fabrication technology, but agents in it, creating our own tools from 3D modeling environments that we would want or need – and having some hope of doing it in a timely manner,” he said.

Greater understanding

The display is reminiscent of a pin art toy in that it forms shapes from a field of tall, rectangular pegs that move up and down. By inputting the specifications of their desired shape in the accompanying 3D modeling program, users can evaluate their creation via the touchable display. Whenever they alter the shape, they can command the display to render it anew. This tactile display is considered 2.5D rather than 3D because the bottom of the display doesn’t change shape.
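To make the idea concrete, here is a minimal sketch, in Python, of what rendering a model onto such a display involves: sample the model's top surface at each pin location, then clamp the result to the pins' physical travel. The grid dimensions, pin travel and function names below are made up for illustration; this is not the team's actual software.

```python
import numpy as np

# Hypothetical parameters -- not the real display's specifications.
GRID_ROWS, GRID_COLS = 12, 12   # pins per side of the display
MAX_PIN_HEIGHT_MM = 50.0        # maximum pin travel

def render_heightmap(surface_height, bounds, zoom=1.0):
    """Sample a model's top surface into per-pin heights (a 2.5D view).

    surface_height(x, y) -> height of the model's top surface at (x, y);
    this stands in for querying the CAD model. bounds gives the region of
    the model mapped onto the display. Only the top surface is captured,
    which is why the display is 2.5D: the underside is not represented.
    """
    xmin, xmax, ymin, ymax = bounds
    xs = np.linspace(xmin, xmax, GRID_COLS)
    ys = np.linspace(ymin, ymax, GRID_ROWS)
    heights = np.array([[surface_height(x, y) for x in xs] for y in ys])
    # Scale for zoom and clamp to the physical pin travel.
    return np.clip(heights * zoom, 0.0, MAX_PIN_HEIGHT_MM)

# Example: render a dome (say, the rounded top of a lid) on the pins.
def dome(x, y, radius=1.0, height=40.0):
    r2 = x * x + y * y
    return height * (1.0 - r2 / radius**2) ** 0.5 if r2 < radius**2 else 0.0

pin_heights = render_heightmap(dome, bounds=(-1.2, 1.2, -1.2, 1.2))
```

In these terms, commanding the display to render a changed design anew amounts to recomputing the height map and driving the pins to the new values.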

The researchers co-designed this system with people who are blind or visually impaired, a process that was integral to making it address the actual needs of its users. In the end, the team produced a system that can rotate a 3D model, zoom in and zoom out on an object, and show it in split sections – such as showing the top and bottom of a cup beside each other. Users can also feel the shape with multiple fingers or their whole hand, which enhances the information they can interpret from the display.

“What really is so awesome is that I can view various perspectives of the object and not just the object in its single state,” said Son Kim, an assistive technology specialist for the Vista Center for the Blind in Palo Alto and co-author of the paper. “That offers greater dimension to understanding the object that you’re attempting to make. And that’s the same opportunity that a sighted peer would have, where they too would be able to view various perspectives of their target object.”

Five people who were blind or visually impaired tested the platform, and the system received very positive feedback, including requests from the users to keep the models they created during testing.

“Personally, I believe that access to tools and access to making is something that’s incredibly important and incredibly powerful,” said Follmer, who is senior author of the paper. “So to hear about the types of devices and objects and 3D models that they wanted to create was the most exciting part.”

Scale and resolution

With the success of this early-stage process of co-design and testing, the researchers would like to improve the scale, affordability and resolution of the pin display – currently, each pin is rather large, so the display can’t show much detail.

“The feedback we received showed that, even with this coarse display, we can still get meaningful interactions,” said Siu. “That suggests there’s a lot of potential in the future for this kind of system.”

The researchers would also like to explore alternatives to the software program, which requires some programming skills and relies on text-based communication. One option may be a system in which users physically adjust the pins and the code changes to match the shape they formed.

“I really am excited about this project,” said Kim. “If it moves toward implementation or mass distribution in a way that is cost-effective, that would enable future visually impaired or blind designers coming out of college to have a tool, which would give that person or persons the level of accessibility to enhance their learning; it contributes to the principle of individual, universal access and promotes independence.”

A more affordable design

Another graduate student in the Follmer lab, Kai Zhang, is focused on creating a less expensive, higher resolution version of the system that involves much smaller pins. Zhang and colleagues have designed and tested a small shape display, detailed in a paper published in IEEE Transactions on Haptics, with pins 1/16-inch (1.6 mm) wide and raw materials that cost $0.11 per pin – a significant reduction in cost compared to other displays.

Video by Stanford SHAPE Lab

“High cost is a major limitation for many existing tactile displays,” explained Zhang. “If a tactile shape display can be built with a price comparable to average consumer electronics – like a smartphone or laptop – it can be widely used for visually impaired people and other applications, such as virtual reality, telepresence and human-computer interaction.”

Other 2.5D tactile displays have one motor per pin, but this design has a single motorized platform that raises all the pins in unison. Then, as the platform moves back down, brakes clutch each pin at its desired position through electrostatic adhesion – similar to what happens after you rub a balloon on your hair.

For testing purposes, the display is a small 4-by-2 pin array, and the platform must rise and fall any time a user wants to render a new shape. The researchers are working to create a larger version that can transition between shapes without having to reset the platform, which will depend on similar electrostatic mechanisms and inexpensive electronics.
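As a rough illustration of that raise-then-clutch cycle, the sketch below uses hypothetical interfaces (move_to and set_brake are stand-ins invented for the example, not the actual hardware API): the platform lifts every pin with the brakes released, then engages each pin's electrostatic brake as the descending platform passes that pin's target height.

```python
def render_shape(target_heights, platform, brakes, max_height=50.0, step=0.5):
    """One raise-then-clutch cycle for a brake-based pin display (illustrative).

    target_heights: dict mapping pin id -> desired height in mm.
    platform: stand-in object with move_to(height).
    brakes: stand-in object with set_brake(pin, engaged), holding a pin
            in place through electrostatic adhesion.
    """
    # 1. Release every brake and lift all pins to the top with the platform.
    for pin in target_heights:
        brakes.set_brake(pin, engaged=False)
    platform.move_to(max_height)

    # 2. Lower the platform in small steps; clutch each pin as the platform
    #    passes that pin's target height, leaving it held at that position.
    height = max_height
    while height > 0.0:
        height = max(0.0, height - step)
        platform.move_to(height)
        for pin, target in target_heights.items():
            if height <= target:
                brakes.set_brake(pin, engaged=True)

    # The pins now hold the shape. Rendering a different shape repeats the
    # whole cycle -- the platform reset that the researchers hope to avoid
    # in future, larger versions.
```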



