This is a selection of our research videos showing how robots learn various motor skills. More videos can be found on our YouTube channel.

Video introduction to the lab

The research activities of the Robot Intelligence Lab, presented by the head of the lab, Dr Petar Kormushev.

The robot in the video is DE NIRO (Design Engineering's Natural Interaction Robot).

Robot DE NIRO is based on a Baxter robot that was heavily modified for the purpose of conducting research on mobile manipulation and robot learning.

DE NIRO crossing a road for the first time

This is the first time our robot DE NIRO has crossed a public road!

We managed to turn quite a few heads in South Kensington, next to Royal Albert Hall.

The noise is not from the robot, by the way. DE NIRO's mobile base is extremely quiet.

DE NIRO is alive!

DE NIRO (Design Engineering's Natural Interaction Robot) is the first robot in the Dyson School of Design Engineering, Imperial College London. The robot is a member of the Robot Intelligence Lab, led by Dr Petar Kormushev.

DE NIRO is based on a Baxter robot produced by Rethink Robotics that will be modified for conducting research on robot learning.

Robot Learns to Flip Pancakes

The video shows a Barrett WAM robot learning to flip pancakes by Reinforcement Learning. The motion is encoded in a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP) that represents the synergies across the different variables through stiffness matrices. An inverse dynamics controller with variable stiffness is used for movement reproduction. The skill is first demonstrated via kinesthetic teaching, and then refined by a policy-learning algorithm. It takes the robot 50 trials to learn this skill.

Publication details here
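As a rough illustration of the setup described above, the sketch below rolls out a one-dimensional DMP and refines its basis weights with a reward-weighted policy update, in the spirit of episodic policy-search algorithms such as PoWER. The dynamics, reward function, and all parameters are invented for illustration; the actual system uses basis force fields, stiffness matrices, and a variable-stiffness inverse dynamics controller.

```python
import numpy as np

def dmp_rollout(weights, g=1.0, y0=0.0, T=100, alpha=25.0, beta=6.25):
    """Integrate a minimal 1-D Dynamic Movement Primitive (Euler steps)."""
    dt = 1.0 / T
    centers = np.linspace(0.0, 1.0, len(weights))
    widths = np.full(len(weights), float(len(weights)) ** 1.5)
    y, dy, x = y0, 0.0, 1.0
    traj = []
    for _ in range(T):
        psi = np.exp(-widths * (x - centers) ** 2)     # Gaussian basis activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x  # forcing term, fades as x -> 0
        ddy = alpha * (beta * (g - y) - dy) + f        # spring-damper plus forcing
        dy += ddy * dt
        y += dy * dt
        x += -2.0 * x * dt                             # canonical system (phase decay)
        traj.append(y)
    return np.array(traj)

def reward(traj, goal=1.0):
    # Hypothetical reward: end near the goal with little overshoot.
    return np.exp(-abs(traj[-1] - goal)) * np.exp(-max(traj.max() - goal, 0.0))

rng = np.random.default_rng(0)
w = np.zeros(10)                   # in practice, initialized from a kinesthetic demo
for trial in range(50):            # the paper reports ~50 trials
    eps = rng.normal(0.0, 2.0, (8, len(w)))            # explore around current policy
    rs = np.array([reward(dmp_rollout(w + e)) for e in eps])
    w = w + (rs @ eps) / (rs.sum() + 1e-10)            # reward-weighted update

print(round(float(reward(dmp_rollout(w))), 3))
```

Because the forcing term vanishes as the phase variable decays, the spring-damper system guarantees convergence to the goal; the learned weights only shape how the movement gets there, which is what makes exploration safe.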

Humanoid robot iCub learns the skill of archery

After being instructed how to hold the bow and release the arrow, the robot learns by itself to aim and shoot arrows at the target. It learns to hit the center of the target in only 8 trials. The learning algorithm, called ARCHER, was developed and optimized specifically for problems like archery training, which have a smooth solution space and prior knowledge about the goal to be achieved. The ARCHER algorithm modulates and coordinates the motion of the two hands, while an inverse kinematics controller generates the arm motions.

Publication details here
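A toy sketch of what makes this kind of goal-informed learning so sample-efficient: because the robot sees where each arrow lands, every trial yields a full error vector rather than a single scalar reward, so the mapping from release parameters to hit point can be regressed from past trials and inverted. The linear "bow" model, noise levels, and parameters below are invented for illustration and are not the actual ARCHER update.

```python
import numpy as np

# Hidden linear "bow physics": release parameters -> hit point on the target.
# A stand-in for the real dynamics; the learner never sees A directly.
A = np.array([[1.2, -0.4],
              [0.3,  0.9]])
rng = np.random.default_rng(1)
shoot = lambda p: A @ p + rng.normal(scale=0.005, size=2)

target = np.zeros(2)                       # center of the target
params = [rng.normal(size=2)]              # initial guess at release parameters
hits = []
for trial in range(8):                     # the robot needed only 8 trials
    hits.append(shoot(params[-1]))
    if len(params) >= 2:
        # Regress a local linear model from parameters to hit points...
        P, H = np.array(params), np.array(hits)
        J, *_ = np.linalg.lstsq(P - P.mean(0), H - H.mean(0), rcond=None)
        # ...then solve for the parameters that should hit the target,
        # adding a little exploration noise to keep the data informative.
        step = np.linalg.lstsq(J.T, target - hits[-1], rcond=None)[0]
        params.append(params[-1] + step + rng.normal(scale=0.02, size=2))
    else:
        params.append(params[-1] + rng.normal(scale=0.5, size=2))

print(round(float(np.linalg.norm(hits[-1] - target)), 3))
```

Contrast this with the pancake-flipping task above, where the reward is a scalar and many more trials are needed: knowing the goal vector turns each shot into a regression sample.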

Humanoid robot learns to clean a whiteboard

The Japanese humanoid robot Fujitsu HOAP-2 learns a surface-cleaning task by imitation learning. The approach allows a free-standing, self-balancing humanoid robot to acquire new motor skills by kinesthetic teaching. The method simultaneously controls the upper and lower body of the robot with different control strategies: imitation learning trains the upper body via kinesthetic teaching, while ankle/hip reaction motion patterns keep the robot balanced.

Publication details here
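The split between the two control strategies can be caricatured in a few lines: the upper body replays a demonstrated wiping trajectory while an independent reactive controller keeps a toy, one-dimensional center-of-mass model balanced. Everything below (the CoM model, gains, and trajectory) is invented for illustration.

```python
import numpy as np

# Upper body: replay a kinesthetically demonstrated wiping motion.
dt, T = 0.01, 300
demo = 0.3 * np.sin(np.linspace(0.0, 4.0 * np.pi, T))  # recorded shoulder angle

# Lower body: a reactive ankle pattern that keeps the toy CoM centered.
ankle, com_offset = 0.0, 0.0
for t in range(T):
    arm = demo[t]                    # imitation: follow the demonstration
    com_offset = 0.5 * arm - ankle   # toy model: arm pose shifts the CoM
    ankle += 10.0 * com_offset * dt  # proportional ankle compensation

print(round(abs(com_offset), 3))
```

The two loops never exchange commands; the balance controller only reacts to the disturbance the arm motion creates, which is the essence of running both strategies simultaneously.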

Kinematic-free Position Control of a Robot Arm

This video demonstrates a novel concept for kinematic-free control of a robot arm: encoderless robot control that relies on no joint-angle information or estimation and requires no prior knowledge of the robot's kinematics or dynamics. The approach works by generating actuation primitives and perceiving their effect on the robot's end-effector using an external camera, thereby building a local kinodynamic model of the robot. Notably, it can adapt even to drastic changes in the robot kinematics, such as a 100% elongation of a link, a 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links.

Publication details here
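The core loop (probe with actuation primitives, observe the effect with a camera, fit a local model, act) can be sketched as below. The simulated two-link arm stands in for the real hardware and is opaque to the controller, which only calls `apply()` and `camera()`; all names and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class HiddenArm:
    """Toy 2-link arm; the controller never sees these internals."""
    def __init__(self, lengths=(1.0, 0.7)):
        self.l = np.array(lengths)
        self.q = np.array([0.3, 0.5])          # joint angles, unknown to the controller
    def apply(self, primitive):                # actuation primitive = joint-space burst
        self.q = self.q + primitive
    def camera(self):                          # end-effector position seen by the camera
        x = self.l[0] * np.cos(self.q[0]) + self.l[1] * np.cos(self.q.sum())
        y = self.l[0] * np.sin(self.q[0]) + self.l[1] * np.sin(self.q.sum())
        return np.array([x, y])

rng = np.random.default_rng(2)
arm, target = HiddenArm(), np.array([0.8, 1.2])
for step in range(30):
    # Probe with small random primitives to build a local kinodynamic model.
    probes, effects = [], []
    for _ in range(4):
        p = rng.normal(scale=0.02, size=2)
        before = arm.camera()
        arm.apply(p)
        effects.append(arm.camera() - before)
        arm.apply(-p)                          # undo the probe
        probes.append(p)
    J, *_ = np.linalg.lstsq(np.array(probes), np.array(effects), rcond=None)
    # Use the local model to step toward the target in camera space.
    err = target - arm.camera()
    move = np.linalg.lstsq(J.T, 0.5 * err, rcond=None)[0]
    arm.apply(np.clip(move, -0.2, 0.2))

print(round(float(np.linalg.norm(arm.camera() - target)), 3))
```

Because the local model is re-estimated from fresh probes at every step, the same loop keeps working if the arm's geometry changes mid-run, which is the point of the kinematic-free approach.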
