The Autonomous Learning research group at the MPI-IS in Tübingen is awarded € 2 million for a period of five years.
IMPRS-IS faculty member Georg Martius, who leads the Autonomous Learning Group at the Max Planck Institute for Intelligent Systems in Tübingen, was awarded a Consolidator Grant by the European Research Council (ERC). Martius and his team will receive € 2 million over a period of five years, starting January 1st, 2023.
“This grant is a game-changer for my research, as it allows me and my team to pursue our long-term research goal of creating versatile and dexterous robots,” says Martius.
Promising researchers of any nationality with seven to twelve years of experience since the completion of their Ph.D. are eligible to apply for an ERC Consolidator Grant. Recipients need to have a track record of excellent publications and must propose a promising and exciting research direction. Martius obtained this grant with the project entitled “Model-based Reinforcement Learning for Versatile Robots in the Real World” – REAL-RL in short. He and his team aim to broaden the research field of autonomous robots that learn from experience. By enabling robots to learn to solve new and challenging tasks independently, Martius and his team lay the foundation for machines to one day become ubiquitous assistants to humans.
Currently, robots are developed for a particular task and are not very versatile when assigned to perform something else. REAL-RL aims to solve this problem by taking a learning approach to robot control. The dominant direction in the field currently relies on model-free reinforcement learning methods, which need an enormous number of interactions with the world – often impractical or impossible for real robots to achieve. As a workaround, simulations are frequently used, but this requires detailed knowledge of all possible situations the robot might encounter. REAL-RL’s model-based approach circumvents these problems: models of interaction with the world, built from learned experience, will be used to plan and adapt behavior on the fly. This approach promises to be much more data-efficient and allows valuable experience to be transferred between tasks.
“By aiming at a generic learning method that can be used to control any robot – rigid or soft, with legs, arms, or whatever – and improving with experience, our team hopes to provide a solid basis for future robotic applications,” Martius concludes.