In our laboratory, we conduct research on intelligent robotic devices that interact with their users, learn from them, and adapt their assistance to maximise their users' physical, cognitive and social well-being. Our research spans several topic areas, including machine learning, user modelling, cognitive architectures, human action analysis, and shared control. We aim to advance fundamental theoretical concepts in these fields without ignoring the engineering challenges of the real world, so our experiments involve real robots, real humans, and real tasks. Do feel free to contact us if you have any queries, are interested in joining us as a student or a researcher, or have a great idea for a scientific collaboration.

Example applications of our research

Research Themes

Adaptive Cognitive Architectures for Human Robot Interaction

Over the past 15 years, we have been developing a core distributed cognitive architecture for understanding human actions, predicting the intention behind them, and, if needed, generating assistance to ensure that the human achieves their desired intention. The core of our architecture relies on learned hierarchical ensembles of inverse and forward models that predict future states of an observed system. To ensure scalability and real-time operation on embedded devices (such as our robots), we employ an attention mechanism that distributes the computational and sensorimotor resources of the robotic device in an optimal manner.
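The inverse/forward-model idea above can be sketched in a few lines. In the sketch below, each candidate intention is represented by a paired inverse model (which proposes a command for its goal) and forward model (which predicts the command's outcome); pairs whose predictions match the observed behaviour gain confidence, and the most confident pair is read out as the recognised intention. The linear toy dynamics and all names here are illustrative, not the architecture's actual learned models:

```python
import numpy as np

class InverseForwardPair:
    """One hypothesis: an inverse model proposing commands towards a goal,
    paired with a forward model predicting the outcome of those commands."""
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)
        self.confidence = 0.0

    def inverse(self, state):
        # Toy inverse model: command that moves the state towards the goal.
        return 0.5 * (self.goal - state)

    def forward(self, state, command):
        # Toy forward model: linear state transition.
        return state + command

    def update_confidence(self, predicted, observed):
        # Reward hypotheses whose predictions match the observation.
        error = np.linalg.norm(predicted - observed)
        self.confidence += 1.0 / (1.0 + error)

def recognise(observed_states, candidate_goals):
    """Run all hypotheses in parallel on an observed trajectory;
    the most confident pair gives the inferred intention."""
    pairs = [InverseForwardPair(g) for g in candidate_goals]
    for state, next_state in zip(observed_states, observed_states[1:]):
        for pair in pairs:
            command = pair.inverse(state)
            predicted = pair.forward(state, command)
            pair.update_confidence(predicted, next_state)
    return max(pairs, key=lambda p: p.confidence).goal
```

In the full architecture, the attention mechanism would additionally decide which of these hypothesis pairs receive computational and sensing resources at each step, rather than running all of them exhaustively as the sketch does.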

We have evaluated this architecture in many diverse tasks, including human-robot collaboration, multiagent computer games, intelligent robotic wheelchairs for disabled adults and children, collaborative music generation, physical education tasks (e.g. dance), and multirobot coordination and control. We generally use the label HAMMER (for “Hierarchical Attentive Multiple Models for Execution and Recognition”) to describe this architectural approach, with the implementation details of the inverse/forward models frequently optimised for the particular task.

Key Publications:

Assistive Robotics

We design and implement algorithms for robots that can assist humans in their daily lives. Specifically, we are interested in adaptive robots: we first build user models, and then employ these models to provide personalised assistance. The main focus of our research is robot-assisted dressing.

More information can be found on Personalized Robot-assisted Dressing.

Key Publications:

  • Zhang F, Cully A, Demiris Y (2017). Personalized Robot-assisted Dressing using Hierarchical Multi-task Control and User Modeling. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017).
  • Gao Y, Chang HJ, Demiris Y (2016). Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016).
  • Gao Y, Chang HJ, Demiris Y (2015). User Modelling for Personalised Dressing Assistance by Humanoid Robots. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), pp. 1840-1845.

Hierarchical Task Representations and Machine Learning

When you observe a person performing an action, there are multiple levels of abstraction that you can use to describe what they are doing. You can describe the trajectories of their body parts, the objects they are using, and/or the effects they are having on their environment. Additionally, if you observe them long enough, you might notice particular usage patterns, traits, and preferences. We research algorithms for learning task representations that accommodate these abstraction levels. Our published work includes representations at the trajectory level using statistical methods (including Gaussian processes, quantum statistics, and Dirichlet processes, among others), neural networks (including reservoir computing algorithms), and linguistic approaches (for example, stochastic context-free grammars).
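As a concrete illustration of the trajectory level, a Gaussian process can represent a demonstrated movement as a smooth function of time, conditioned on a handful of observed (time, position) samples. The sketch below implements only the GP posterior mean with a squared-exponential kernel; the kernel, length scale and noise level are illustrative choices, not those of any specific published model:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential (RBF) kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(t_train, y_train, t_query, noise=1e-4):
    """GP regression posterior mean: a smooth trajectory model
    conditioned on observed (time, position) samples."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_s = rbf_kernel(t_query, t_train)
    return K_s @ np.linalg.solve(K, y_train)
```

Given a few noisy samples of a demonstrated reach, the posterior mean can then be queried at arbitrary times, which is what makes GP representations convenient for comparing, recognising, or replaying trajectories at the lowest abstraction level.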

Key Publications:

In-vehicle Intelligent Systems

We design and implement algorithms for modelling the user's behaviour during driving. Our aim is to provide personal assistance and training as well as predict and avoid forthcoming critical situations.

Key Publications:

Robot Vision

Building kinematic structures of articulated objects from visual input data is an active research topic in computer vision and robotics. The accurately estimated kinematic structure represents motion properties as well as shape information of an object in a topological manner, and it encodes relationships between rigid body parts connected by kinematic joints.

Accurate and efficient estimation of kinematic correspondences between heterogeneous objects is beneficial in computer vision and robotics for many high-level tasks, such as learning by imitation, human motion retargeting to robots, human action recognition from different sensors, viewpoint-invariant human action recognition via 3D skeletons, behaviour discovery and alignment, affordance-based object/tool categorisation, body-schema learning for robotic manipulators, and articulated object manipulation. Therefore, in our lab we focus on estimating accurate kinematic structures and on finding correspondences between two articulated kinematic structures extracted from different objects.

Key Publications:

Sensorimotor Self and Mirroring

The aim of this research theme is to design and implement efficient learning algorithms for self-exploration and body-schema building in humanoid robots, and for understanding different environmental objects. These capabilities are embedded into a cognitive developmental framework so that the robot can acquire a mirror system, bootstrapping the understanding of the actions of others from the learned model of the self.

Key Publications:

Shared Control & Haptic Telepresence

One of the important lines of research in our lab is concerned with how the control of a robotic device can be shared between a human collaborator and a sensor-based autonomous decision-making process, so that the final outcome takes advantage of the strengths of both. Our shared (or collaborative) control methods receive input from the human collaborator, estimate the current environmental state, form predictions regarding the intention of the user as well as the expected outcome of the current control commands, and generate assistance (complementary control signals) when, and only when, needed. We have applied these shared control methods in multiple domains, including shared control of robotic wheelchairs for the elderly and for disabled children and adults, as well as in high-performance scenarios such as F1 racing.
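The "assist only when needed" principle above can be sketched as a simple authority-blending rule: below some predicted-risk threshold the human's command passes through unchanged, and above it, control authority shifts gradually towards an autonomous safety command. The linear blending, the threshold value, and all names here are illustrative assumptions, not the lab's actual controllers:

```python
import numpy as np

def shared_control(user_cmd, auto_cmd, predicted_risk, threshold=0.3):
    """Blend the human's command with an autonomous safety command.
    Below the risk threshold the human keeps full authority; above it,
    authority shifts linearly towards the autonomous controller."""
    user_cmd = np.asarray(user_cmd, dtype=float)
    auto_cmd = np.asarray(auto_cmd, dtype=float)
    if predicted_risk <= threshold:
        return user_cmd  # assist only when needed
    alpha = min(1.0, (predicted_risk - threshold) / (1.0 - threshold))
    return (1.0 - alpha) * user_cmd + alpha * auto_cmd
```

In a wheelchair setting, for instance, `user_cmd` would come from the joystick, `auto_cmd` from an obstacle-avoidance planner, and `predicted_risk` from the intention/outcome predictions described above; the gradual hand-over keeps the user in control whenever their command is safe.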

Key Publications:

Related European Research Projects


Research Sponsors and Industrial Collaborators
