The name of the base system (platform): | TensorFlow |
Developers: | Google |
Date of the premiere of the system: | March 2020 |
Branches: | Information technology |
Technology: | Application development tools |
2020: Launch
At the end of March 2020, Google released SEED RL, an open-source platform that allows the training of AI models to be scaled across thousands of machines. According to the developers, this solution reduces costs by 80%, allowing startups to build algorithms on a par with the products of large technology companies.
The SEED RL framework is built on the TensorFlow 2.0 platform and uses a combination of graphics processors (GPUs) and tensor processors (TPUs) to centralize model inference. Inference is performed by the same component that trains the model.
According to Google, this component can scale across thousands of cores, while the number of actors, which alternate between taking steps in the training environment and requesting model inference to predict the next action, can scale to thousands of machines.
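The pattern described above can be illustrated with a minimal sketch. The following Python example is not the SEED RL code or API; the names, the toy policy, and the thread/queue transport are simplifying assumptions used only to show the idea of actors delegating all inference to one central learner that batches requests.

```python
# Minimal sketch (assumption, not the SEED RL API) of centralized batched inference:
# actors only step their environments; one learner answers all inference requests.
import queue
import random
import threading

NUM_ACTORS = 4
BATCH_SIZE = 4
STEPS_PER_ACTOR = 8

request_q = queue.Queue()                               # observations from all actors
reply_qs = [queue.Queue() for _ in range(NUM_ACTORS)]   # per-actor action channels


def policy(batch_of_obs):
    """Stand-in for model inference on the learner's accelerator (toy rule)."""
    return [0 if obs < 0.5 else 1 for obs in batch_of_obs]


def learner():
    """Collects observations, runs one batched inference call, returns actions.

    In SEED RL this component also performs training updates; that part is
    omitted here for brevity.
    """
    served, total = 0, NUM_ACTORS * STEPS_PER_ACTOR
    while served < total:
        batch = [request_q.get()]                        # wait for at least one request
        while len(batch) < BATCH_SIZE and not request_q.empty():
            batch.append(request_q.get())                # opportunistically fill the batch
        actions = policy([obs for _, obs in batch])      # single batched inference call
        for (actor_id, _), action in zip(batch, actions):
            reply_qs[actor_id].put(action)               # send the action back to its actor
        served += len(batch)


def actor(actor_id):
    """Steps a toy environment; all inference is delegated to the learner."""
    obs = random.random()
    for _ in range(STEPS_PER_ACTOR):
        request_q.put((actor_id, obs))                   # ship the observation out
        action = reply_qs[actor_id].get()                # block until the learner replies
        obs = random.random() if action == 1 else obs * 0.9  # toy environment transition


threads = [threading.Thread(target=learner)]
threads += [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("finished", NUM_ACTORS, "actors x", STEPS_PER_ACTOR, "steps")
```

In the real system the actors and the learner run on separate machines and communicate over the network rather than through in-process queues, which is what allows the actor pool to grow to thousands of machines while inference stays on the learner's accelerators.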
Google says that the results of testing SEED RL show a considerable speed-up in training, and because this approach is much cheaper than using graphics processors, the cost of experiments drops significantly. According to the developers, SEED RL allows reinforcement learning to exploit the potential of hardware accelerators on an equal footing with other deep learning methods.
Constellation Research analyst Holger Mueller told SiliconANGLE that SEED RL looks like one more example of reinforcement learning, which, in his view, is becoming one of the most promising AI methods for building next-generation applications.
The SEED RL source code is published on GitHub, along with examples of running the framework on Google Cloud infrastructure with GPUs.[1][2]