Research

  • Our present research investigates new modalities of interaction with electronic systems to improve human-machine interaction. We have developed new calibration, target prediction and multimodal fusion algorithms to improve the accuracy and response times of gaze controlled interfaces. We are investigating automotive and military aviation environments and trying to improve human-machine interaction for secondary mission control tasks. In parallel, we are working on detecting users' cognitive load by analysing their ocular parameters using low-cost, off-the-shelf eye gaze trackers. We have also worked with users with severe speech and motor impairment and developed various applications to help them engage with society. Various MDes projects developed mechatronics and cyber-physical systems and investigated application areas such as smart manufacturing, automotive systems and drones.
  • In the automotive domain, we are working both on facilitating HMI while operating a dashboard and on developing technology to automatically detect driver distraction. Our patent on a Multimodal Gaze Controlled Interactive HUD proposes a head-up display that projects the dashboard onto the windscreen, so that drivers need not take their eyes off the road while undertaking secondary tasks; the display itself can be operated using drivers' eye gaze and finger movements. Our driver distraction detection work investigates different sensors for tracking head and eye gaze movements and uses machine learning models to relate these measurements to the complexity of driving tasks (a classifier sketch appears after this list).
  • Our research in the automotive domain was readily extended to military aviation, in particular to fast jets. We have an operational flight simulator in our lab, which is integrated with a Multimodal Gaze Controlled HUD and HMDS. We invented new target prediction technologies for eye gaze tracking systems to reduce pointing and selection times in the HUD and HMDS (a target prediction sketch appears after this list). The HUD can also be integrated with brain-computer and voice-controlled systems, and we have integrated a gaze controlled system with a high-end flight simulator at the National Aerospace Laboratory and collected data in combat aircraft for cognitive load estimation of pilots.
  • We also actively pursue research on Assistive Technology and work with spastic children. Earlier, we investigated eye-gaze tracking data from people with visual impairment, audiogram data from people with hearing impairment and hand strength data from people with motor impairment, and developed an inclusive user modelling system to promote inclusive design of electronic interfaces. The system found applications in a wide variety of systems, including a digital TV framework for elderly users, a weather monitoring system, an Augmentative and Alternative Communication aid for spastic children, an electronic agricultural advisory system and a smartphone-based emergency warning system.
  • With funding from Microsoft Research and a DST SERB Early Career Fellowship, we are now developing gaze controlled assistive systems for students with severe speech and motor impairment. Our initial study found significant differences in visual search patterns for operating a graphical user interface between users with cerebral palsy and their able-bodied counterparts. Presently we are developing and evaluating gaze controlled intelligent user interfaces for augmentative and alternative communication aids and edutainment systems. Recent research is also developing multilingual virtual keyboards for Indian languages.
  • For various projects, we are developing interactive AR and VR applications. We drew on research in AI and multimodal interaction to integrate non-traditional modalities such as eye gaze and gesture recognition, and developed products for automotive, manufacturing and aerospace clients. Several of these systems also proved useful for people with mobility impairment: users with severe speech and motor impairment could undertake a representative pick-and-drop task using our eye gaze controlled robotic arm. Our recent work compared different computer graphics and machine learning algorithms and integrated a webcam-based eye gaze tracker with a robotic manipulator through a video see-through display.
  • With seed funding from Bosch, we initiated research on smart manufacturing. Presently we are developing IoT modules for environment and posture tracking. We have developed a visualization module that displays both spatial and temporal data: by holding a semi-transparent screen in front of an electronic screen and a VR sensor dashboard, a user can augment spatial information with temporal trends, or vice versa. The system also supports interaction through an interactive laser pointer and an AR-based Android application, and is integrated with an early warning system that automatically sends an email through Gmail if it senses any deviation in sensor readings in the context of environment tracking (an alerting sketch appears after this list). Presently we are working with British Telecom to develop a digital twin of their office space with interactive sensor dashboards.
  • We work on both automated ground vehicles and drones. We aim to integrate information about on-board passengers into the existing situational awareness process to enhance the safety and comfort of autonomous vehicles. Our MDes students developed a modular drone payload with multiple onboard cameras, sensors and transmitters that can send information to a base station without requiring any electronic integration with the drone. We are also working on comparing and developing CNN models for detecting irregular traffic participants in Indian road conditions.
  • We are developing a PCB (printed circuit board) inspection system that can detect the correct orientation and type of IC chips using a webcam and shape detection algorithms. It is also integrated with a projection system to indicate the correct position of the IC on a PCB. Our system can be used without any CAD description of the IC chip and is based on classical computer vision image processing algorithms that do not require prior training (an orientation detection sketch appears after this list). The project is part of a larger effort comparing different shape detection features and algorithms in cluttered backgrounds under different lighting conditions and camera resolutions.
  • We have worked with a local start-up company to develop technology for helping cricket players and coaches. We validated a bespoke wireless IMU sensor developed by StanceBeam and attached to a cricket bat using an Optitrack Flex 13 camera system. We compared both the orientation and the speed of the cricket bat while executing shots, as measured by the StanceBeam IMU sensor and the Optitrack system (a comparison sketch appears after this list). Later we tried to simultaneously track the eye gaze of a batsman and integrate measurements of bat and eye gaze movements.
  • In response to the Covid-19 pandemic, we developed a website that automatically divides the duration of spread of the disease into stages based on the rate of increase in new cases, and shows a set of three graphs that are easier to interpret and extrapolate than a single exponential graph. The shapes of the graphs (linear, parabolic or exponential) can be compared across different stages and countries with respect to the average number of new cases and deaths (a shape classification sketch appears after this list). The system also generates a set of comparative graphs to compare different countries and states. The video on the right-hand side shows an application of the website with automatic speech recognition and text-to-speech systems.
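
The sketch below illustrates the kind of machine learning step used in the driver distraction work: relating head and eye movement features to driving task complexity. It is a minimal example, assuming a hypothetical feature log; the file name, feature columns and labels are illustrative, not those of our system.

    # Minimal sketch: relate gaze/head features to driving task complexity.
    # The CSV file, feature columns and labels are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("gaze_head_features.csv")          # hypothetical per-trial feature log
    features = ["mean_fixation_ms", "saccade_rate",     # ocular parameters
                "head_yaw_variance", "pupil_diameter"]  # head movement and pupil measures
    X, y = df[features], df["task_complexity"]          # e.g. labels: low / medium / high

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))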
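
For the gaze controlled HUD and HMDS, target prediction shortens pointing and selection times by inferring the intended target before the gaze cursor reaches it. The sketch below shows one simple way of doing this by extrapolating recent gaze motion; the extrapolation rule, sampling rate and look-ahead are illustrative assumptions, not our patented algorithm.

    # Minimal sketch of gaze target prediction: extrapolate recent gaze motion and
    # select the on-screen target closest to the predicted end point.
    # The 60 Hz sampling rate and look-ahead are illustrative assumptions.
    import numpy as np

    def predict_target(gaze_samples, targets, lookahead_s=0.3, rate_hz=60):
        """gaze_samples: (N, 2) recent gaze points in screen coordinates, oldest first.
        targets: (M, 2) centres of selectable widgets. Returns the index of the likely target."""
        pts = np.asarray(gaze_samples, dtype=float)
        velocity = (pts[-1] - pts[0]) / max(len(pts) - 1, 1)     # average motion per sample
        predicted = pts[-1] + velocity * lookahead_s * rate_hz   # extrapolated gaze position
        dists = np.linalg.norm(np.asarray(targets, dtype=float) - predicted, axis=1)
        return int(np.argmin(dists))

    # Example: gaze sweeping rightwards towards the second of three buttons
    samples = [(100, 300), (140, 300), (180, 302), (220, 301)]
    buttons = [(150, 100), (600, 300), (300, 500)]
    print(predict_target(samples, buttons))   # -> 1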
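
The early warning component of the smart manufacturing dashboard flags deviations in sensor readings and sends an email through Gmail. The sketch below is a minimal version of that idea, assuming a simple k-sigma deviation test and Gmail's standard SMTP endpoint; the addresses, app password and threshold are placeholders.

    # Minimal sketch of the early-warning idea: flag a reading that deviates from the
    # recent mean and send an alert e-mail through Gmail's SMTP server.
    # Addresses, the app password and the 3-sigma rule are placeholder assumptions.
    import smtplib
    import statistics
    from email.message import EmailMessage

    def check_and_alert(recent_readings, new_value, sensor="temperature", k=3.0):
        mean = statistics.mean(recent_readings)
        std = statistics.stdev(recent_readings)
        if abs(new_value - mean) <= k * std:          # within the expected band: no alert
            return False
        msg = EmailMessage()
        msg["Subject"] = f"Deviation detected on {sensor} sensor"
        msg["From"] = "lab.alerts@gmail.com"          # hypothetical sender account
        msg["To"] = "facility.manager@example.com"    # hypothetical recipient
        msg.set_content(f"{sensor} reading {new_value} is outside {mean:.1f} +/- {k} x {std:.1f}")
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
            server.login("lab.alerts@gmail.com", "app-password")  # use a Gmail app password
            server.send_message(msg)
        return True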
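
The PCB inspection system detects IC position and orientation with classical computer vision and no prior training. The sketch below shows how such a training-free step can look using OpenCV thresholding, contour extraction and rotated bounding boxes; the Otsu threshold and area filter are illustrative assumptions rather than our exact pipeline.

    # Minimal sketch of training-free IC detection: threshold the webcam frame, find
    # large contours and report each component's rotated bounding box and orientation.
    import cv2

    def detect_ic_orientations(image_path, min_area=2000):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        results = []
        for c in contours:
            if cv2.contourArea(c) < min_area:             # skip solder pads, tracks and noise
                continue
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)  # rotated bounding box of the chip body
            results.append({"centre": (cx, cy), "size": (w, h), "angle_deg": angle})
        return results

    print(detect_ic_orientations("pcb_frame.jpg"))        # hypothetical webcam frame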
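
Validating the StanceBeam bat sensor against the Optitrack Flex 13 system amounts to comparing the two speed (and orientation) traces on a common timeline. The sketch below illustrates that comparison for speed, assuming hypothetical CSV exports and column names; the resampling rate and error metrics are our illustrative choices.

    # Minimal sketch of the validation step: resample IMU and Optitrack bat-speed traces
    # onto a common timeline and report their agreement.
    # The CSV files, column names and 120 Hz grid are illustrative assumptions.
    import numpy as np
    import pandas as pd

    imu = pd.read_csv("stancebeam_swing.csv")    # hypothetical: columns time_s, speed_mps
    opti = pd.read_csv("optitrack_swing.csv")    # hypothetical: columns time_s, speed_mps

    t = np.arange(0, min(imu.time_s.max(), opti.time_s.max()), 1 / 120)  # common 120 Hz grid
    imu_speed = np.interp(t, imu.time_s, imu.speed_mps)
    opti_speed = np.interp(t, opti.time_s, opti.speed_mps)

    rmse = np.sqrt(np.mean((imu_speed - opti_speed) ** 2))
    corr = np.corrcoef(imu_speed, opti_speed)[0, 1]
    print(f"RMSE: {rmse:.2f} m/s, Pearson r: {corr:.3f}")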
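
The Covid-19 website compares the shapes of case-count graphs (linear, parabolic or exponential) across stages and countries. The sketch below shows a minimal way to classify the shape of one stage by fitting the three model families and keeping the best one; the example data and RMSE criterion are illustrative assumptions.

    # Minimal sketch of the shape comparison: fit linear, parabolic and exponential models
    # to the daily new-case counts of one stage and keep the one with the lowest error.
    import numpy as np

    def classify_shape(new_cases):
        x = np.arange(len(new_cases), dtype=float)
        y = np.asarray(new_cases, dtype=float)
        fits = {
            "linear": np.polyval(np.polyfit(x, y, 1), x),
            "parabolic": np.polyval(np.polyfit(x, y, 2), x),
            # exponential: fit a line to log(y), then transform back
            "exponential": np.exp(np.polyval(np.polyfit(x, np.log(np.maximum(y, 1.0)), 1), x)),
        }
        rmse = {name: float(np.sqrt(np.mean((pred - y) ** 2))) for name, pred in fits.items()}
        return min(rmse, key=rmse.get), rmse

    shape, errors = classify_shape([10, 21, 39, 82, 158, 320, 640])  # cases roughly doubling daily
    print(shape)   # -> "exponential" (lowest RMSE for this series)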