Cassie, a bipedal robot developed by Agility Robotics, has learned to walk on its own thanks to a new method devised by American researchers. By combining reinforcement learning with two virtual environments, the robot was able to practice walking in simulation and then transfer that learning to the real world.
The robots of Boston Dynamics impress with their ability to move autonomously, and a recent video even shows them dancing, a true demonstration of their agility. However, these robots took years of development to get there. Researchers at the University of California, Berkeley have opted for a new, faster method that lets a robot learn to walk on its own.
Cassie is a robot developed by Agility Robotics that consists essentially of a pair of legs. For the first time, the researchers used reinforcement learning to teach such a robot to walk. However, Cassie could not learn directly in the real world without risking damage from repeated falls.
Autonomous learning through two successive simulations
The researchers therefore developed a three-step system. It starts with a virtual environment called MuJoCo, in which a simulation of the robot learns to reproduce an entire library of movements. However, simulations of this kind are not accurate enough on their own for the learning to be usable in the real world. To validate the results of this first training, the researchers used a second virtual environment called SimMechanics. It is much more detailed and requires far more computing power, so it does not run in real time. The final step was transferring the learned controller to the physical robot.
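The two-stage simulation idea can be illustrated with a toy sketch: train a controller in a fast but noisy simulator, then validate it in a slower, more accurate one. This is not the researchers' actual method (they trained a neural-network policy in MuJoCo and checked it in SimMechanics); the classes, the single "gain" parameter, and the hill-climbing loop below are all invented stand-ins to show the pipeline's shape.

```python
import random

class FastSim:
    """Stand-in for a fast but less accurate simulator (MuJoCo's role in the article)."""
    def __init__(self, noise=0.1):
        self.noise = noise

    def rollout(self, gain):
        # Toy reward: peaks when the control gain is near 1.0;
        # the noise term models simulator inaccuracy.
        return 1.0 - abs(gain - 1.0) + random.uniform(-self.noise, self.noise)

class DetailedSim(FastSim):
    """Stand-in for a slow, high-fidelity simulator (SimMechanics' role)."""
    def __init__(self):
        super().__init__(noise=0.02)  # far less modeling error

def train(sim, episodes=500, step=0.05):
    """Crude hill climbing: keep a perturbed gain if it scores better in simulation."""
    gain = 0.0
    for _ in range(episodes):
        candidate = gain + random.uniform(-step, step)
        if sim.rollout(candidate) > sim.rollout(gain):
            gain = candidate
    return gain

random.seed(0)
gain = train(FastSim())              # stage 1: learn in the fast simulator
score = DetailedSim().rollout(gain)  # stage 2: validate in the detailed simulator
print(f"learned gain {gain:.2f}, validated reward {score:.2f}")
```

The point of the split is the same as in the article: the fast simulator makes trial-and-error cheap, while the detailed simulator catches controllers that only worked because of the first simulator's inaccuracies, before anything touches the real hardware.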
Once the learning was transferred to the robot, the researchers were able to test it in the real world without modification. Cassie could walk upright or crouching, handle different surfaces, carry unexpected loads, and recover when pushed or when it tripped over an object. The robot even kept walking after two motors in its right leg failed. Cassie is not yet ready to compete with Boston Dynamics' Atlas or Spot, but this progress should accelerate the development of new robots.