My research explores human psychology through the analysis of visual and auditory perception, rapid aiming movements and problem-solving strategies in the context of human-machine interaction. I have also worked on real-time fusion of data from infrared and gyroscopic sensors and invented a new algorithm to predict the target of rapid aiming movements originating from hand, head or eye-gaze movement. My research concerns extreme human-machine interaction, ranging from investigating interaction issues for people with severe physical impairment to proposing new interaction techniques for aircraft pilots and automotive drivers, where the context itself imposes situational impairment. I aim to make the user interfaces of modern electronic devices accessible and intuitive to elderly and disabled users as well as to their able-bodied counterparts in different contexts of use.
In particular, I work on developing a user model that reflects the problems faced by people with a wide range of abilities. The user model combines research in medical science and psychology with artificial intelligence and machine learning, and is implemented through a simulator and a set of web services. The simulator helps designers understand, visualize and measure the effect of impairment on their designs. It promotes a user-centred design process in which designs, even paper-and-pencil drawings, can be evaluated against users' range of abilities before implementation. The simulator was calibrated with eye-gaze tracking data from people with visual impairment, audiogram data from people with hearing impairment and hand-strength data from people with motor impairment. The user modelling web services are built on the simulator: users store their profile on a web server, and it travels with them irrespective of device and application. The profile helps to adapt the interfaces of both desktop and web-based applications. This inclusive user modelling system has found applications in a wide variety of systems, including a digital TV framework for elderly users, a weather monitoring system, an augmentative and alternative communication aid for children with spastic cerebral palsy, an electronic agricultural advisory system and a smartphone-based emergency warning system.
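To illustrate the idea of profile-driven adaptation, the sketch below maps a stored ability profile to interface parameters. This is a hedged, minimal illustration only: the real system exposes web services, and every field name, threshold and function here is invented for the example rather than taken from the actual user model.

```python
# Hypothetical sketch of profile-driven interface adaptation.
# All profile field names and thresholds are illustrative, not
# those of the actual inclusive user modelling services.

def adapt_interface(profile):
    """Map a stored ability profile to interface parameters."""
    settings = {"font_size_pt": 12, "min_button_px": 32, "captions": False}

    acuity = profile.get("visual_acuity", 1.0)  # 1.0 = normal vision
    if acuity < 1.0:
        # Enlarge text in proportion to reduced acuity.
        settings["font_size_pt"] = round(12 / max(acuity, 0.25))

    if profile.get("hearing_loss_db", 0) > 40:
        # Moderate or worse hearing loss: turn on captions.
        settings["captions"] = True

    grip = profile.get("grip_strength_kg")
    if grip is not None and grip < 15:
        # Larger touch targets for users with reduced hand strength.
        settings["min_button_px"] = 48

    return settings
```

Because the profile lives on a server rather than a device, the same adaptation logic can serve a desktop application, a web page or a digital TV front end.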
In parallel, I pursue research on new modalities of interaction and on developing multimodal systems. I am investigating optimal combinations of, and techniques for integrating, multiple input modalities in assistive technology, automotive and aviation environments. I have successfully combined a single-switch scanning technique with an eye-gaze tracking system and developed new target prediction models for eye-gaze and head-movement tracking. These systems have found important applications for pilots of combat aircraft as well as for elderly Indian users, who can complete pointing and selection tasks faster with my eye-gaze tracking system than with a conventional computer mouse. I have explored use of the system in the automotive environment and integrated the eye-gaze tracking system with a joystick and a Leap Motion controller. I am also working on eye-gaze tracking based information visualization, in particular for large-scale spatial data tasks such as map browsing and face searching.
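As a hedged illustration of target prediction (not the actual model from my studies), one simple family of predictors scores on-screen candidate targets by how well they lie along the current direction of a rapid aiming movement. The function and weighting below are invented for the example:

```python
import math

def predict_target(trajectory, targets):
    """Score candidate targets by angular deviation from the current
    movement heading, with a small distance penalty; return the
    best-scoring target.

    trajectory: list of (x, y) pointer/gaze samples (at least 2)
    targets:    list of (x, y) candidate target centres
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    heading = math.atan2(y1 - y0, x1 - x0)

    def score(target):
        tx, ty = target
        bearing = math.atan2(ty - y1, tx - x1)
        # Wrap the angular deviation into [0, pi].
        dev = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
        dist = math.hypot(tx - x1, ty - y1)
        return dev + 0.001 * dist  # distance weight is an arbitrary illustration

    return min(targets, key=score)
```

A real predictor would also exploit the movement's velocity profile and could drive either cursor snapping or target expansion; this sketch shows only the geometric core of the idea.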
Cognitive Load Detection
My recent research compared different ocular parameters for detecting cognitive load in the context of human-machine interaction. In particular, I have explored the use of saccadic intrusions to detect drivers' cognitive load and their instantaneous perception of developing road hazards. Saccadic intrusions are a type of eye-gaze movement previously found to be related to changes in cognitive load. I developed an algorithm to detect saccadic intrusions using a commercially available low-cost eye-gaze tracker and conducted a series of user studies involving a driving simulator and cognitive and hazard-perception tests. Results show that the average velocity of saccadic intrusions increases with cognitive load, and that recording saccadic intrusions and eye blinks over a 6-second window can predict drivers' instantaneous perception of developing road hazards.
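The detection algorithm itself is not reproduced here; as a minimal sketch of the general approach, saccadic intrusions can be flagged in a horizontal gaze trace as small but fast excursions during fixation, using velocity and amplitude thresholds. The thresholds and function below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def detect_intrusions(gaze_x, timestamps, vel_thresh=10.0, amp_max=2.0):
    """Hypothetical saccadic-intrusion detector for a horizontal
    gaze trace (degrees of visual angle).

    Flags samples whose point-to-point velocity exceeds `vel_thresh`
    deg/s while the excursion from the fixation centre stays below
    `amp_max` degrees, i.e. small, fast movements during fixation.
    Returns (intrusion_mask, mean_intrusion_velocity).
    """
    gaze_x = np.asarray(gaze_x, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    vel = np.abs(np.diff(gaze_x) / np.diff(t))    # point-to-point velocity, deg/s
    amp = np.abs(gaze_x[1:] - np.median(gaze_x))  # excursion from fixation centre
    mask = (vel > vel_thresh) & (amp < amp_max)
    mean_vel = float(vel[mask].mean()) if mask.any() else 0.0
    return mask, mean_vel
```

The mean intrusion velocity returned here corresponds to the kind of summary statistic that, in my studies, increased with cognitive load.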
This research has been disseminated through 5 books and over 60 journal and conference papers in Human-Computer Interaction and Assistive Technology, and has been cited in reputed journals and conference proceedings such as ACM CHI, ACM TACCESS, Frontiers in Neuroprosthetics and ACM ASSETS. At Cambridge, I worked on the Jaguar Land Rover funded MATSA and DIPBLAD projects until March 2016, and earlier successfully completed the EU funded Gentle User Interfaces for Disabled and Elderly (EU GUIDE) project, the BAE Systems funded Evolutionary Human Machine Interfaces (EVO-HMI) project and the EPSRC funded India UK Advanced Technology Centre of Excellence (IUATC) project. The IUATC project is one of the biggest ICT initiatives under UK-India collaborative funding. My research has been featured on the University of Cambridge website and in the UK Department of Health's report on Research and Development in assistive technology.
Beyond these projects, I have collaborated with industrial designers and developers in the process of writing international standards. I am Vice Chairman of the Focus Group on Smart TV at the International Telecommunication Union (ITU-T, the telecommunication standardization sector of the UN agency ITU) and Working Group Coordinator of the ITU-T Focus Group on Audiovisual Media Accessibility. I worked on drafting two international standards to promote accessibility in audiovisual media, particularly in Smart TV. The Focus Group on Smart TV has already published my work in the form of an ITU report.