Roboticists have long studied the movement and structure of animals when designing new devices. Google's researchers did nothing different, but their results are attracting attention.
They used imitation learning to teach autonomous robots how to walk, turn, and move more agilely.
To do this, they captured motion data by attaching several sensors to a real dog. In this way they "taught" a quadruped robot several different movements that are difficult to achieve with traditional hand programming.
They used the real dog's movement data to build simulations of each maneuver, including a dog trot and a side step, as reported in MIT Technology Review. They then mapped the key joints of the simulated dog onto the robot so that the simulated robot would move exactly the way the animal did.
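The joint-mapping step above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual pipeline: the joint names, the simple scale-and-offset retargeting, and the mean-squared pose error are all assumptions standing in for the real retargeting math.

```python
import numpy as np

# Hypothetical key joints shared between the dog skeleton and the robot.
KEY_JOINTS = ["hip_fl", "knee_fl", "hip_fr", "knee_fr",
              "hip_rl", "knee_rl", "hip_rr", "knee_rr"]

def retarget(dog_angles, scale=1.0, offset=0.0):
    """Map dog joint angles (radians) onto the robot's joint conventions.
    A real retargeter accounts for differing limb proportions; this sketch
    uses a simple per-joint scale and offset as a placeholder."""
    return {j: scale * a + offset for j, a in dog_angles.items()}

def pose_error(robot_angles, reference_angles):
    """Mean squared error between the robot's pose and the retargeted
    reference pose -- low error means the robot is imitating the animal."""
    diffs = [robot_angles[j] - reference_angles[j] for j in KEY_JOINTS]
    return float(np.mean(np.square(diffs)))

# Example: one reference frame from a mocap trot clip vs. the robot's pose.
dog_frame = {j: 0.1 * i for i, j in enumerate(KEY_JOINTS)}
reference = retarget(dog_frame)
robot_pose = {j: reference[j] + 0.05 for j in KEY_JOINTS}
print(pose_error(robot_pose, reference))  # → 0.0025
```

Tracking this per-frame error over a whole motion clip is what lets the simulated robot reproduce the trot or side step frame by frame.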
With reinforcement learning, the robot then learned to stabilize those movements and to compensate for differences in weight distribution and body design between the dog and the machine.
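The reinforcement-learning step can be hinted at with a minimal sketch: a reward that pays the policy for tracking the reference motion, trained under randomized physical parameters so the behavior stays stable when the simulation and the real robot differ. The reward weights, the exponential shape, and the randomization ranges are common choices in motion-imitation work, used here purely as assumptions.

```python
import math
import random

def imitation_reward(pose_err, velocity_err, w_pose=0.6, w_vel=0.4):
    """Reward in (0, 1]: highest when the policy exactly tracks the
    reference motion's poses and velocities."""
    return (w_pose * math.exp(-5.0 * pose_err)
            + w_vel * math.exp(-0.1 * velocity_err))

def randomize_dynamics(rng):
    """Sample perturbed physics for one training episode, so the policy
    cannot overfit to one exact weight distribution or motor model."""
    return {
        "mass_scale": rng.uniform(0.8, 1.2),      # robot heavier/lighter than modeled
        "friction": rng.uniform(0.5, 1.25),       # ground contact varies
        "motor_strength": rng.uniform(0.9, 1.1),  # actuator differences
    }

rng = random.Random(0)
episode_physics = randomize_dynamics(rng)
print(imitation_reward(pose_err=0.0, velocity_err=0.0))  # → 1.0 (perfect tracking)
```

Training across many such randomized episodes is one standard way a policy learned in simulation can survive the differences it meets on real hardware.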
When the policy was deployed on the physical robot in the laboratory, some movements still had to be corrected, but most of the work was already done.
Teaching robots the complex, agile movements needed to navigate the real world is a genuine challenge, and with imitation learning of this kind we can build capable machines more quickly and send them to places humans cannot reach, such as collapsed structures, cave openings and much more.
Even so, there are challenges to overcome: the robot's weight limits its ability to learn certain maneuvers, such as high jumps or fast runs. In addition, capturing motion data from animals with sensors is not always possible; it is expensive and requires the animal's cooperation (fine when it is a dog, but a big cat will not collaborate much).