Project Details
Description
In their personal and professional lives, people need to perform an enormous range of complex tasks, including navigation, maintenance, and assembly. Technology can make these tasks easier, and augmented reality, which interactively assists users by overlaying crucial missing information directly on their view, is especially promising. This project aims to understand and improve how augmented reality can facilitate task performance by superimposing information in users' visual fields to help identify objects and features, direct actions, and provide spatial-temporal overviews and other perspectives. Augmented reality is a rapidly developing information display technology that uses either head-mounted displays or camera-equipped handhelds to add information to the user's visual field. The project will develop and test the best ways to facilitate performance across multiple people, perspectives, platforms, and tasks to convey the what, where, when, and how of action. The research will be conducted in the laboratory and in the field, including the historic El Barrio section of East Harlem. Project software is expected to significantly improve the ability of residents and visitors, alone and in groups, to learn about, plan visits to, and explore the neighborhood.
Each of the tasks studied in this research entails an organized sequence of actions with respect to features or objects in the world. The features or objects may be hard to find or even occluded. The actions may be complex. Work is increasingly collaborative, requiring coordination with others who have different perspectives. Each action can depend on previous and subsequent actions. The research will integrate work in cognitive science and computer science to develop principled approaches for using the transformative technology of augmented reality to assist people in navigation, maintenance, assembly, and related tasks. It will advance our understanding of how people represent, transform, and communicate space and the actions in it; of how to design instructions; of how to develop systems that assist users in assembly, maintenance, and navigation; and of how to design stationary, mobile, and wearable user interfaces that use graphics, multimedia, and augmented reality, both indoors and outdoors. The scientific results will inform the study of how objects, actions, and data are communicated, as well as instructional design and user interface design. The techniques and systems developed will inform the design of future systems that can aid the general public for educational and recreational ends, as well as systems that can assist people with auditory, visual, or physical impairments. Navigation, maintenance, and assembly are representative of many important daily tasks. Software created by the project will be adaptable to different situations and displays and will be released to the public as open source. The project will train students and will be carried out in part through collaborative interdisciplinary projects in courses in computer science and cognitive science.
| Status | Finished |
| --- | --- |
| Effective start/end date | 9/1/15 → 8/31/19 |
Funding
- National Science Foundation: US$802,286.00
ASJC Scopus Subject Areas
- Computer Science (all)