
AlphaGo

Product
Base system (platform): Artificial intelligence (AI)
Developer: Google DeepMind (DeepMind Technologies)
System premiere date: 2014
Last release date: 2017/10/19
Industries: Show business, leisure, sport


AlphaGo is a program for playing the board game Go, built on AI technologies.

AlphaGo applies methods originally developed for image recognition to evaluating a position and choosing the most promising move in that position: deep learning with convolutional neural networks is used to build two neural networks. The policy network narrows down the number of moves considered in each position, while the value network estimates a position without playing the game out to the end.
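The division of labor between the two networks can be sketched as a small convolutional model with a policy head and a value head. The sketch below is a minimal illustration in PyTorch, not the published AlphaGo architecture: the layer sizes, the 17 input feature planes, and the DualHeadNet name are assumptions, and the original AlphaGo in fact trained the two networks separately rather than as two heads on a shared trunk.

```python
import torch
import torch.nn as nn

class DualHeadNet(nn.Module):
    """Illustrative two-headed convolutional network: a policy head that
    scores candidate moves and a value head that estimates the position.
    Layer sizes and input planes are placeholders, not AlphaGo's."""

    def __init__(self, board_size: int = 19, planes: int = 17, channels: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(planes, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Policy head: one logit per board point, plus one for "pass".
        self.policy = nn.Sequential(
            nn.Conv2d(channels, 2, kernel_size=1),
            nn.Flatten(),
            nn.Linear(2 * board_size * board_size, board_size * board_size + 1),
        )
        # Value head: a single scalar in [-1, 1] estimating the game outcome.
        self.value = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Flatten(),
            nn.Linear(board_size * board_size, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor):
        features = self.trunk(x)
        return self.policy(features), self.value(features)

# Example: evaluate one (empty) encoded position.
net = DualHeadNet()
policy_logits, value = net(torch.zeros(1, 17, 19, 19))
```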

To assess the program's strength, its creators organized a tournament between AlphaGo and the best free and commercial Go programs (Crazy Stone, Zen, Pachi, Fuego), which use the Monte Carlo method, as well as GnuGo, which was the strongest free program before the Monte Carlo method came into use. AlphaGo won 494 games out of 495.
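For context, the Monte Carlo method used by those engines estimates how good a position is by averaging the results of many random playouts. The sketch below shows only that basic idea; the legal_moves, apply_move, is_terminal and winner callables are hypothetical hooks into a Go rules engine, not part of any of the programs named above.

```python
import random

def random_playout(state, legal_moves, apply_move, is_terminal, winner, player):
    """Play uniformly random moves to the end of the game and report
    whether `player` won. All callables are hypothetical hooks into a
    Go rules engine supplied by the caller."""
    while not is_terminal(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return 1.0 if winner(state) == player else 0.0

def monte_carlo_value(state, legal_moves, apply_move, is_terminal, winner,
                      player, playouts: int = 1000) -> float:
    """Estimate the win rate of `player` from `state` by averaging many
    random playouts -- the core idea behind Monte Carlo Go engines."""
    wins = sum(
        random_playout(state, legal_moves, apply_move, is_terminal, winner, player)
        for _ in range(playouts)
    )
    return wins / playouts
```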

Go is one of the most ancient board games. Until recently it was believed that a computer could not play on equal terms with a professional player because of the game's high level of abstraction and the impossibility of searching all possible continuations: the number of admissible combinations on a standard goban exceeds the number of atoms in the observable Universe.
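A rough back-of-the-envelope comparison illustrates the scale: published counts put the number of legal 19x19 positions at roughly 2e170, against the usual estimate of about 1e80 atoms in the observable Universe. The figures in the snippet are those standard outside estimates, not values taken from this article.

```python
# Orders of magnitude only: legal 19x19 Go positions (~2.1e170 by published
# counts) versus the commonly cited ~1e80 atoms in the observable universe.
legal_go_positions = 2.1e170
atoms_in_observable_universe = 1e80

print(legal_go_positions > atoms_in_observable_universe)   # True
print(f"ratio is about 1e{170 - 80}")                       # ~1e90
```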

2017

AlphaGo Zero: a self-taught algorithm developed for playing Go

On October 19, 2017, DeepMind announced the development of an upgraded version of the AlphaGo algorithm for playing Go.

The latest version of the algorithm, which received the Zero suffix to its name, leaves people no chance of victory. The technology is completely self-taught: it is able to learn strategy without analyzing games played by humans[1].

AlphaGo Zero is also based on artificial neural networks, but it is programmed differently. The original AlphaGo was designed to learn the skill of playing Go by drawing on experience from games played with people.

The more advanced AlphaGo Zero consists of a single neural network. It was only told what the basic attribute of the game is, the board, along with the rules; everything else it learned on its own. Without studying games played by people, AlphaGo Zero trained on its own games. It began with meaningless moves, but after 4.9 million games it had learned the game well enough to beat the original AlphaGo without losing a single game.
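The self-play loop described above can be sketched in a few lines: the network plays games against itself, every recorded position is labeled with the final result, and the network is updated on those labeled positions. This is a minimal sketch of the general idea under stated assumptions, not DeepMind's implementation; play_move, update, initial_state, is_terminal and winner are hypothetical hooks, and the real system combined the network with tree search during self-play.

```python
def self_play_game(net, play_move, initial_state, is_terminal, winner):
    """Generate one training game by letting the current network play
    against itself from the empty board. The callables are hypothetical
    hooks into a Go engine supplied by the caller."""
    state, history = initial_state(), []
    to_move = "black"
    while not is_terminal(state):
        move = play_move(net, state, to_move)        # the net picks its own move
        history.append((state, to_move, move))
        state = apply_to(state, move)                # hypothetical rules update
        to_move = "white" if to_move == "black" else "black"
    result = winner(state)
    # Label every recorded position with the final outcome (+1 win, -1 loss).
    return [(s, m, 1.0 if p == result else -1.0) for s, p, m in history]

def train_by_self_play(net, update, play_move, initial_state, is_terminal,
                       winner, games: int = 1000):
    """Minimal self-play loop: play, label with the outcome, update the
    network, repeat. AlphaGo Zero ran roughly 4.9 million such games."""
    for _ in range(games):
        examples = self_play_game(net, play_move, initial_state, is_terminal, winner)
        update(net, examples)   # gradient step on (position, move, outcome) triples
    return net

def apply_to(state, move):
    """Hypothetical placeholder for applying a move under the Go rules."""
    raise NotImplementedError
```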

According to DeepMind, this approach freed the artificial intelligence from the limitations of the human mind. At the same time, the self-taught neural network approach used to create AlphaGo Zero will not be limited to board games. DeepMind believes it can be applied to a wider range of complex problems that share properties with Go, such as planning problems, or situations that require taking a series of actions in the correct sequence (protein folding or reducing energy consumption).

AlphaGo won its final game against Ke Jie and retired from Go

In April 2017, DeepMind, a subsidiary of Alphabet, announced a planned match between the AlphaGo program and Ke Jie, the strongest Go player in the world.

DeepMind intended to play the AlphaGo match against Ke Jie, the world's strongest Go player according to the independent Go Ratings ranking. There is no official world championship in Go, so it is impossible to become the official world champion; however, by weighing players' victories in different tournaments, the de facto strongest player can be identified with high accuracy, and at that moment it was Ke Jie.

As part of a Go festival scheduled for May 23 to May 27 in the Chinese town of Wuzhen (Zhejiang Province), a three-game match between AlphaGo and Ke Jie was planned. The festival organizers also intended to use the AI in other game formats: in particular, professional players would be offered games against each other in which each side would have a computer teammate. In addition, a match of "AlphaGo against a team of the five strongest players of China" was planned[2].

In May 2017, Ke Jie of China, the world's strongest Go player, lost the second game to the AlphaGo program. With that, AlphaGo secured victory in the three-game match. Experts following the match noted that Ke Jie began the game "ideally", creating combinations across the whole board that were difficult for his opponent. However, AlphaGo managed to simplify the game and achieve victory.


In the third game against Ke Jie, AlphaGo played the white stones. After nearly three and a half hours of play, the Chinese professional resigned, even though he still had more than 32 minutes of thinking time left. Thus the program won all three games. At a press conference after the game, DeepMind representatives said that this was the last match in which the AI would play, since this time the competitive program had shown "the highest level of play for AlphaGo". It should be noted that this probably refers only to AlphaGo's withdrawal from competitive matches; the wording most likely does not mean that the program will stop playing Go altogether.

2016

From March 9 to March 15, 2016, a match between AlphaGo and Lee Sedol was held in Seoul, South Korea. Five games were played for a prize fund of $1 million, and the games were broadcast live on YouTube. AlphaGo won the match with a score of 4-1.

Match 1 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo (2016)

2015

In October 2015, AlphaGo beat Fan Hui, the three-time European champion, in a five-game match with a score of 5-0. It was the first time in history that a computer had beaten a professional at Go in an even game. This was publicly announced in January 2016, after the publication of an article in Nature.




Notes