My research explores human psychology, analyzing visual and auditory perception, rapid aiming movements and problem-solving strategies in the context of human-machine interaction. I have also worked on real-time fusion of data from infrared and gyroscopic sensors, and invented a new algorithm to predict the target of rapid aiming movements originating from hand, head or eye gaze motion. My research spans extreme human-machine interaction, ranging from investigating interaction issues for people with severe speech and motor impairment (SSMI) to proposing new interaction techniques for aircraft pilots and automotive drivers, where the context itself imposes situational impairment. I aim to make the user interfaces of modern electronic devices accessible and intuitive to elderly and disabled users, as well as to their able-bodied counterparts, in different contexts of use.
From 2019 onwards, I have worked on Augmented, Virtual and Mixed Reality systems and set up the AR/VR teaching lab for the Advanced Manufacturing program at IISc. My research integrates deep learning and IoT modules with AR/VR environments, developing digital twins for the smart manufacturing setup at the DHI Common Engineering Facility Centre (CEFC), IISc, and for British Telecom office spaces at Bangalore and Adastral Park as part of the British Telecom India Research Centre (BTIRC).
From 2013 onwards, I have been working on developing eye gaze controlled intelligent interaction techniques and interfaces for limited-mobility environments. My research on military aviation was funded by BAE Systems (BAES) while I was at the University of Cambridge (2013-2015), and later by the Aeronautical Research and Development Board (ARDB), the Aeronautical Development Agency (ADA) and, more recently, the Indian Space Research Organization (ISRO). Presently, I am involved in the cockpit design of advanced military aviation platforms and India's maiden Human Space Flight Mission (Gaganyaan). In 2020, the UK's New Scientist magazine covered my research on eye gaze controlled interfaces for military aviation.
My research team has also developed various eye gaze controlled systems for users with SSMI through a DST Early Career Fellowship (2017), the DBT BBI2 project, a Microsoft AI for Accessibility grant award (2018) and a Facebook Responsible AR/VR grant award (2021). We have developed an Augmentative and Alternative Communication aid with an instant messaging facility, a virtual keyboard, edutainment games and robot control interfaces. The eye gaze controlled robotic manipulator received significant media attention in 2020 from outlets such as NDTV, All India Radio, TechExplore and Disability Insider. The IISc Director appointed me as convener of the Nahar Centre for Robotics and Prototyping at IISc.
I invented an eye gaze and gesture controlled interactive Head Up Display (HUD) for cars and a cognitive load estimation system for drivers. My research was initially funded by Jaguar Land Rover (JLR) while I was at Cambridge (2015-2016) and later by Faurecia Services Groupe (2018-2021). I am also investigating human-machine interaction issues for semi-autonomous vehicles as part of WIRIN (the Wipro-IISc Research Network) and the DST Technology Innovation Hub (TIH) on IoT and Autonomous Vehicles. I am mentoring a team of my PhD students that was shortlisted for the next round of the Toycathon, and I am a member of the Aham team that reached the semi-finals of the $10M ANA XPrize Challenge.
From 2010 onwards, I worked on developing a user model that reflects the problems faced by people with a wide range of abilities. The user model combines research in medical science and psychology with artificial intelligence and machine learning, and is implemented through a simulator and a set of web services. The simulator helps designers understand, visualize and measure the effect of impairments on their designs. It promotes a user-centred design process in which designs, even paper-and-pencil drawings, can be evaluated with respect to the range of abilities of users before implementation. The simulator was calibrated with eye-gaze tracking data from people with visual impairment, audiogram data from people with hearing impairment and hand strength data from people with motor impairment.

The user modelling web services are built on this simulator. Users store their profile on a web server, and it travels with them irrespective of device and application; the profile helps adapt the interfaces of both desktop and web-based applications. This inclusive user modelling system has found applications in a wide variety of systems, including a digital TV framework for elderly users, a weather monitoring system, an Augmentative and Alternative Communication aid for spastic children, an electronic agricultural advisory system and a smartphone-based emergency warning system. My research was featured on the University of Cambridge website and in the UK Department of Health report on Research and Development in Assistive Technology. In 2020, the International Telecommunication Union (ITU), the telecommunications agency of the United Nations, approved a new work item on a Common User Profile format, which I am drafting as lead editor based on my research on user modelling.
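As an illustration only, the following minimal sketch shows how a stored ability profile of this kind could drive interface adaptation. The field names, thresholds and scaling rules here are assumptions for exposition, not the published inclusive user modelling web-service API:

```python
# Hypothetical sketch: a stored user profile drives interface adaptation.
# Profile fields (visual_acuity, grip_strength_kg, hearing_loss_db) and
# all thresholds are illustrative assumptions, not the actual system.

def adapt_interface(profile):
    """Return UI parameters scaled to a user's measured abilities."""
    settings = {"font_size_pt": 12, "button_size_px": 48, "captions": False}
    # Enlarge text for reduced visual acuity (1.0 = normal vision).
    acuity = profile.get("visual_acuity", 1.0)
    if acuity < 1.0:
        settings["font_size_pt"] = round(12 / max(acuity, 0.25))
    # Enlarge touch targets for reduced hand strength (kilograms of force).
    if profile.get("grip_strength_kg", 30) < 15:
        settings["button_size_px"] = 72
    # Enable captions for measured hearing loss above 40 dB HL.
    if profile.get("hearing_loss_db", 0) > 40:
        settings["captions"] = True
    return settings

profile = {"visual_acuity": 0.5, "grip_strength_kg": 10, "hearing_loss_db": 55}
print(adapt_interface(profile))
# → {'font_size_pt': 24, 'button_size_px': 72, 'captions': True}
```

Because the profile lives on a server rather than on one device, the same `adapt_interface` logic could be applied by any desktop or web application the user signs into.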