• Human-Computer Interaction
  • Graph Grammar

We thank the NSF for funding the Natural User Interaction Lab (NUI Lab). Under grant CNS-1126570, the NUI Lab has been equipped with a variety of state-of-the-art hardware, including a Microsoft PixelSense tabletop, a Surface, a Nexus tablet and smartphone, and various sensors. Built on these devices, our research focuses on user-friendly multi-device, multi-medium interaction involving a variety of devices, media, and sensors. In particular, the following projects have been conducted under grant CNS-1126570.

MobiSurf: A Framework for Bimanual Inter-Device Interactions

A shared interactive display (e.g., a tabletop) provides a large space for collaborative interactions. However, a public display lacks a private space for accessing sensitive information. A mobile device, on the other hand, offers a private display and a variety of modalities for personal applications, but it is limited by its small screen. We have developed a framework that supports fluid and seamless interactions between a tabletop and multiple mobile devices. The framework continuously tracks each user's actions (e.g., hand movements or gestures) on the tabletop and automatically generates a unique personal interface on the associated mobile device. This type of inter-device interaction integrates a collaborative workspace (i.e., the tabletop) with a private area (i.e., a mobile device) and provides multimodal feedback. To support this interaction style, the framework is implemented with an event-driven architecture on the Microsoft PixelSense tabletop. The framework hides the details of user tracking and inter-device communication, so interface designers can focus on developing domain-specific interactions by mapping a user's actions on the tabletop to a personal interface on his or her mobile device. The results of two user studies support the usability of the proposed interaction style, as sketched below.
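
The sketch below illustrates this event-driven mapping in miniature: tabletop events from a tracked user are routed to per-gesture handlers, and each handler returns a personal-interface description that is pushed to the user's paired device. It is a minimal illustration, not the published MobiSurf code; all class, method, and gesture names are our own assumptions.

# Minimal sketch (hypothetical names) of an event-driven inter-device mapping.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TabletopEvent:
    user_id: str   # which tracked user produced the event
    gesture: str   # e.g., "tap", "drag", "pinch"
    payload: dict  # gesture-specific data (position, touched object, ...)


class InterDeviceBus:
    """Routes tabletop events to per-gesture handlers; each handler returns a
    personal-interface description sent to the user's paired mobile device."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[TabletopEvent], dict]] = {}
        self._devices: Dict[str, List[dict]] = {}  # stand-in for a push channel

    def pair(self, user_id: str) -> None:
        self._devices[user_id] = []

    def on(self, gesture: str, handler: Callable[[TabletopEvent], dict]) -> None:
        self._handlers[gesture] = handler

    def dispatch(self, event: TabletopEvent) -> None:
        handler = self._handlers.get(event.gesture)
        if handler and event.user_id in self._devices:
            ui = handler(event)                      # domain-specific mapping
            self._devices[event.user_id].append(ui)  # "push" to the device


bus = InterDeviceBus()
bus.pair("alice")
bus.on("tap", lambda e: {"screen": "detail_view", "item": e.payload["item"]})
bus.dispatch(TabletopEvent("alice", "tap", {"item": "document_42"}))
# alice's device now shows: [{'screen': 'detail_view', 'item': 'document_42'}]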

Full paper: http://www.sciencedirect.com/science/article/pii/S1045926X14000950

The Zigzag Paradigm: A New P300-Based Brain-Computer Interface

Brain-Computer Interfaces (BCIs) translate electroencephalogram (EEG) input, digitally recorded via electrodes on the user's scalp, into output commands that control external devices. A P300 speller is based on visual event-related potentials (ERPs) elicited in response to stimulation, as derived from the EEG, and is used to type on a computer screen. The Row-Column Paradigm (RCP), which uses a 6-by-6 character matrix, has been a widely used and successful P300 speller, despite inherent problems of adjacency, crowding, and fatigue. We compare the RCP with a new P300 speller interface, the Zigzag Paradigm (ZP). In the ZP interface, every second row of the 6-by-6 character matrix is offset to the right by d/2 cm, where d cm is the horizontal distance between two adjacent characters. This shift addresses the adjacency problem by increasing the distance between most adjacent characters. It also addresses the crowding problem for most characters, and critically for the target character, by reducing the number of other characters that surround each character. A user study with neurologically normal participants revealed significant improvements in online classification performance with the ZP, supporting the view that the ZP effectively addresses the adjacency and crowding problems. Subjective ratings also showed that the ZP was more comfortable and caused less fatigue, indicating that it offers a solution to the fatigue problem. Theoretical and practical implications of applying the ZP to patients with neuromuscular diseases are discussed.
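
The geometry behind the layout is easy to verify in a few lines of code. The sketch below builds both layouts for a 6-by-6 matrix and computes, for each, the minimum distance from a character to any character in a different row, showing how the d/2 offset pushes inter-row neighbors apart. The spacing values are illustrative assumptions, not values taken from the paper.

# Sketch of the RCP vs. ZP layout geometry; spacing values are assumed.
import math

D = 3.0  # assumed horizontal distance d between adjacent characters (cm)
V = 3.0  # assumed vertical distance between adjacent rows (cm)


def layout(zigzag: bool) -> dict:
    """Map (row, col) -> (x, y) position for a 6-by-6 character matrix."""
    return {
        (r, c): (c * D + (D / 2 if zigzag and r % 2 else 0.0), r * V)
        for r in range(6) for c in range(6)
    }


def min_inter_row_distance(positions: dict) -> float:
    """Minimum distance from any character to a character in another row."""
    items = list(positions.items())
    return min(
        math.dist(p, q)
        for (rc1, p) in items for (rc2, q) in items
        if rc1[0] != rc2[0]
    )


print("RCP:", min_inter_row_distance(layout(zigzag=False)))  # 3.0 (= V)
print("ZP: ", min_inter_row_distance(layout(zigzag=True)))   # ~3.35 (= sqrt((D/2)**2 + V**2))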

Full paper: http://dl.acm.org/citation.cfm?id=2522880

PhoneLens: A Low-Cost, Spatially Aware, Mobile-Interaction Device

Large paper sheets remain the medium engineers prefer for inspecting remote sites. However, paper documents are difficult to modify and retrieve. This paper presents a novel, spatially aware mobile system, called PhoneLens, that combines the merits of paper documents and mobile devices by augmenting paper documents with digital information. Unlike previous approaches, PhoneLens is inexpensive: it requires only two infrared light-emitting diodes (LEDs), one Wiimote, and one Android device. On top of this hardware setup, we developed an efficient spatial-tracking algorithm that records the movement of a mobile device within a large workspace. Our approach is robust and applicable to various scenarios. PhoneLens provides several functions for browsing a multivalent document, such as browsing different layers, searching annotations, and zooming. We conducted a controlled study that compared participants' performance with PhoneLens against a traditional paper-based method on a multivalent paper document. At a significance level of p < 0.05, the study yielded the following results: (1) PhoneLens was significantly more efficient than the paper-based method on search and measurement tasks; (2) PhoneLens was rated higher than the paper-based method on subjects' overall experience; and (3) the usefulness of the training on PhoneLens was positively correlated with subjects' browsing efficiency.
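
As a rough illustration of how two LED blobs seen by a Wiimote's infrared camera can yield a device position, the sketch below applies the standard pinhole-camera model: the known physical LED separation and the apparent pixel separation give depth, and the blob midpoint gives the lateral offsets. This is a generic technique sketched under assumed parameters (LED separation, focal length); the actual PhoneLens tracking algorithm is the one described in the paper.

# Sketch of pinhole-model tracking from two IR blobs; parameters are assumed.
import math

LED_SEPARATION_MM = 80.0       # assumed physical distance between the two IR LEDs
FOCAL_LENGTH_PX = 1320.0       # assumed focal length of the Wiimote IR camera (pixels)
IMAGE_CENTER = (512.0, 384.0)  # center of the Wiimote's 1024x768 IR image


def track(blob_a, blob_b):
    """Estimate the device position (x, y, z) in mm from two blob pixel coords."""
    pixel_gap = math.dist(blob_a, blob_b)
    z = FOCAL_LENGTH_PX * LED_SEPARATION_MM / pixel_gap  # similar triangles
    mid_u = (blob_a[0] + blob_b[0]) / 2.0
    mid_v = (blob_a[1] + blob_b[1]) / 2.0
    # Back-project the blob midpoint through the pinhole model.
    x = (mid_u - IMAGE_CENTER[0]) * z / FOCAL_LENGTH_PX
    y = (mid_v - IMAGE_CENTER[1]) * z / FOCAL_LENGTH_PX
    return x, y, z


# Two blobs 120 px apart at the image center -> device roughly 0.88 m away.
print(track((452.0, 384.0), (572.0, 384.0)))  # (0.0, 0.0, 880.0)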

Full paper: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6750026&tag=1

Video: PhoneLens

SmartCamera: A Low-cost and Intelligent Camera Management System

Intelligent camera management systems have been developed to record meetings automatically for videoconferencing. These systems provide many benefits, such as reducing production costs and conveniently documenting events. However, automatically recorded videos are generally not visually engaging. This paper presents a novel approach that intelligently controls camera shots and angles to improve visual interest. We use 3D infrared images captured by a Kinect sensor to recognize active speakers and their positions in a meeting. A movable camera, constructed by placing a wireless PTZ (pan-tilt-zoom) camera on top of a motorized rail, can automatically move to frame the active speaker in the center of the screen. Without interrupting the meeting, a speaker can seamlessly switch video sources through gesture-based commands. We have summarized and implemented a set of heuristic rules to simulate a human director, and these rules can be edited visually through a graphical user interface. This customization of the virtual director makes our system applicable to various scenarios. We conducted a user study, and the evaluation results confirmed the quality of the automated videos.
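
The sketch below shows the flavor of such a rule-based virtual director: heuristic rules map the current meeting state (who is speaking, for how long, and how long the current shot has been held) to the next camera shot. The specific rules and thresholds here are illustrative assumptions, not the ones implemented in SmartCamera.

# Sketch of a heuristic, rule-based virtual director; rules and thresholds assumed.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MeetingState:
    speaker: Optional[str]  # active speaker identified via the depth sensor
    speech_seconds: float   # how long the current speaker has been talking
    shot_seconds: float     # how long the current shot has been on screen


MIN_SHOT_SECONDS = 4.0    # rule: hold every shot long enough to avoid rapid cuts
MIN_SPEECH_SECONDS = 1.5  # rule: ignore brief interjections


def choose_shot(state: MeetingState, current_shot: str) -> str:
    """Apply director rules to pick the next shot: 'wide' or 'closeup:<name>'."""
    if state.shot_seconds < MIN_SHOT_SECONDS:
        return current_shot                # keep the shot; a cut now would jar
    if state.speaker is None:
        return "wide"                      # nobody speaking: show the whole room
    if state.speech_seconds >= MIN_SPEECH_SECONDS:
        return f"closeup:{state.speaker}"  # frame a sustained speaker centrally
    return current_shot


print(choose_shot(MeetingState("bob", 2.0, 6.0), "wide"))  # closeup:bob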

Full paper: SmartCamera

An early version of the video: SmartCamera

Coming Soon
