Abstract

A large number of highly structured documents are available on the Internet. Their logical structure is important for readers to handle the document content efficiently. In graphical user interfaces, each logical structure element is represented by a specific visualisation, such as a graphical icon, which allows sighted readers to recognise the structure at a glance and enables direct navigation and manipulation. Blind and visually impaired persons, however, cannot use graphical user interfaces, and the emerging category of mobile and wearable devices, which offer only small visual displays or no visual display at all, likewise requires a non-visual alternative.

DOKY, a multi-modal user interface for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices such as smart phones, smart watches and smart tablets, was developed as a result of inductive research among 205 blind and visually impaired participants. Its name is derived from a short form of the terms document and accessibility. The interface enables users to gain a quick overview of the document structure and to skim and scan the document content efficiently by identifying the type, level, position, length, relationship and content text of each element, as well as to focus, select, activate, move, remove and insert structure elements or text. These interactions are presented non-visually using Earcons, Tactons and synthetic speech utterances, serving the auditory and tactile human senses. Navigation and manipulation are provided through the multi-touch, motion (linear acceleration and rotation) or speech recognition input modalities. DOKY is thus a complete solution for reading, creating and editing structured documents in a non-visual way, and it requires no special hardware.

A flexible, platform-independent and event-driven software architecture implementing the DOKY user interface, together with the automated structured observation research method employed to investigate the effectiveness of the proposed user interface, is presented. Because the architecture is platform- and language-neutral, it can be used on a wide variety of platforms and in many environments and applications for mobile and wearable devices. Each component is defined by interfaces and abstract classes only, so that it can easily be changed or extended, and is grouped in a semantically self-contained package.

To determine whether the proposed user interface design concepts and user interaction design concepts are effective means for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices, an investigation into the effectiveness of the DOKY user interface was carried out through automated structured observations of 876 blind and visually impaired research subjects, who performed 19 exercises on a highly structured example document using the DOKY Structured Observation App on their own mobile or wearable devices, remotely over the Internet. The results show that the proposed user interface design concepts for presentation and navigation and the user interaction design concepts for manipulation are effective, and that their effectiveness depends on the input modality and hardware device employed as well as on the use of screen readers.
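To illustrate the interface-based, event-driven component style described above, the following is a minimal sketch in Java. All names here (StructureElement, PresentationListener, AuditoryPresenter, DocumentNavigator) are hypothetical illustrations of the general idea, not the thesis's actual DOKY API; a real implementation would drive audio and vibration hardware instead of printing.

import java.util.ArrayList;
import java.util.List;

/** A logical structure element of a document, e.g. a heading or list item. */
final class StructureElement {
    final String type;   // element type, e.g. "heading"
    final int level;     // nesting depth within the document tree
    final String text;   // content text of the element

    StructureElement(String type, int level, String text) {
        this.type = type;
        this.level = level;
        this.text = text;
    }
}

/** Components depend on this interface only, so implementations can be swapped. */
interface PresentationListener {
    void onElementFocused(StructureElement element);
}

/** Example non-visual output channel: an Earcon followed by synthetic speech. */
final class AuditoryPresenter implements PresentationListener {
    @Override
    public void onElementFocused(StructureElement element) {
        // Placeholder for triggering real audio output.
        System.out.printf("[earcon: %s, level %d] speak: \"%s\"%n",
                element.type, element.level, element.text);
    }
}

/** Event source: focusing an element notifies all registered presenters. */
final class DocumentNavigator {
    private final List<PresentationListener> listeners = new ArrayList<>();

    void addListener(PresentationListener l) { listeners.add(l); }

    void focus(StructureElement element) {
        for (PresentationListener l : listeners) {
            l.onElementFocused(element);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        DocumentNavigator navigator = new DocumentNavigator();
        navigator.addListener(new AuditoryPresenter()); // auditory channel only
        navigator.focus(new StructureElement("heading", 1, "Abstract"));
    }
}

Because the navigator depends only on the PresentationListener interface, a tactile presenter (Tactons) or a speech-only presenter could be registered alongside or instead of the auditory one without changing the navigation component, which is the extensibility property the abstract attributes to the architecture.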

Keywords

User Interface, Multi-Modal, Non-Visual, Structured Document, Accessibility, Assistive Technology, Presentation, Navigation, Manipulation, Earcon, Tacton, Multi-Touch, Motion, Gestures, Mobile Device, Smart Phone, Smart Tablet, Smart Watch, Wearable Device, Blind, Visual Impairment

Document Type

Thesis

Publication Date

2019
