Developers: UC Berkeley
System premiere date: November 2024
Industry: Information Technology
Technology: Robotics
History
2024: Product Announcement
In late November 2024, researchers at the University of California, Berkeley, unveiled a system that lets robots exchange skills without human involvement. The RoVi-Aug platform generates synthetic visual demonstrations across different robot types and camera angles, making the learned skills more versatile.
Unlike other platforms, RoVi-Aug requires no extra test-time steps, supports policy adaptation, and can learn tasks for multiple robots at once. This makes skill transfer more efficient and increases training success rates by 30%. According to the research team, the new approach marks a significant step forward in the development of more independent and adaptive robots.
Research shows that data-scaling techniques help robots learn both general and specialized skills. However, the datasets available for robot training are far smaller than those used for advanced language and vision AI models, and collecting the diverse, useful real-robot data needed to train new robots is a slow, labor-intensive, and complex process. Projects such as Open X-Embodiment (OXE) combine information from 60 robot datasets to improve learning efficiency, helping robots share experience and expand their capabilities.
RoVi-Aug, in turn, uses these datasets to create visual demonstrations that vary by robot type and camera angle. The platform consists of two main parts: a robot augmentation module (Ro-Aug), which generates demonstrations featuring different robotic systems, and a viewpoint augmentation module (Vi-Aug), which simulates demonstrations from different camera angles.[1]
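The modules' internals are not described here, so the sketch below is only a conceptual illustration of the pipeline: one recorded demonstration is expanded into many synthetic ones by swapping the robot's appearance (Ro-Aug) and the camera viewpoint (Vi-Aug). The Demonstration class, the ro_aug and vi_aug functions, and their stub bodies are hypothetical assumptions, not the authors' API; in the real system both stages would be carried out by learned image-synthesis models rather than the trivial placeholders used here.

```python
# Conceptual sketch of the RoVi-Aug two-stage augmentation pipeline.
# All names and stub bodies are hypothetical placeholders for illustration.

from dataclasses import dataclass
import numpy as np


@dataclass
class Demonstration:
    frames: np.ndarray   # (T, H, W, 3) RGB observations of the robot
    actions: np.ndarray  # (T, action_dim) recorded robot actions
    robot: str           # source robot type, e.g. "franka"


def ro_aug(demo: Demonstration, target_robot: str) -> Demonstration:
    """Robot augmentation (Ro-Aug): repaint the source robot in every frame
    as the target robot. Stub: a real system would segment the robot and
    re-render it with a generative image model."""
    new_frames = demo.frames.copy()  # placeholder for generative repainting
    return Demonstration(new_frames, demo.actions, target_robot)


def vi_aug(demo: Demonstration, camera_shift: tuple) -> Demonstration:
    """Viewpoint augmentation (Vi-Aug): simulate a different camera angle.
    Stub: a pixel shift stands in for true novel-view synthesis."""
    dy, dx = camera_shift
    new_frames = np.roll(demo.frames, shift=(dy, dx), axis=(1, 2))
    return Demonstration(new_frames, demo.actions, demo.robot)


if __name__ == "__main__":
    demo = Demonstration(
        frames=np.zeros((10, 64, 64, 3), dtype=np.uint8),
        actions=np.zeros((10, 7), dtype=np.float32),
        robot="franka",
    )
    # One real demonstration expands into many synthetic ones by combining
    # robot swaps with viewpoint changes; actions are kept unchanged.
    augmented = [
        vi_aug(ro_aug(demo, robot), shift)
        for robot in ("ur5", "jaco")
        for shift in ((0, 4), (4, 0))
    ]
    print(len(augmented), "synthetic demonstrations from one original")
```

The key design point the sketch preserves is that only the visual observations are synthesized; the recorded actions are reused unchanged, which is what allows a policy trained on the augmented data to transfer across robot embodiments and camera placements.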