Title: A Visual-Auditory Presentation Model for Sequential Textual Information
Authors: Shuang Xu, Xiaowen Fang, Jacek Brzezinski, Susy S. Chan
Abstract: Based on Baddeley’s working memory model [3] and research on human attention, this study aims to design a visual-auditory information presentation that: (1) minimizes interference between the visual and auditory channels during information processing; and (2) improves the effectiveness of mentally integrating information from different modalities. Baddeley’s model suggests that visual-spatial information and verbal information can be held concurrently in different subsystems within human working memory. Accordingly, this research proposes a method for converting sequential textual information into graphical and verbal representations and hypothesizes that this dual-modal presentation will yield better comprehension performance and higher satisfaction than a purely textual display. Simple t-tests will be used to test the hypothesis. Results of this study will help address usability problems associated with small-screen computers and mobile information access via handheld devices. Findings may also benefit the interface design of general computer systems by reducing the overload of information output in the visual channel.
Full Paper:  [doc]
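The abstract notes that simple t-tests will be used to compare the two presentation conditions. As a rough illustration only, the sketch below shows how such a comparison might be run in Python with SciPy; the group names, sample sizes, and scores are hypothetical placeholders and do not come from the study.

```python
from scipy import stats

# Hypothetical comprehension scores (e.g., questions answered correctly)
# for the dual-modal and text-only presentation groups.
dual_modal_scores = [14, 16, 15, 17, 13, 18, 16, 15]
text_only_scores = [12, 13, 11, 14, 12, 15, 13, 12]

# Independent-samples t-test comparing the two conditions.
t_stat, p_value = stats.ttest_ind(dual_modal_scores, text_only_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A p-value below the chosen alpha (e.g., 0.05) would be consistent with the
# hypothesis that the dual-modal presentation improves comprehension.
```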