
Adversarial Robustness Toolbox

Product
Developers: IBM
Release date: April 2018
Industry: Information technology

2018: Announcement of a tool for protecting AI systems against adversarial attacks

In April 2018, IBM released what the company claims is the first tool on the market for protecting artificial intelligence (AI) systems against adversarial attacks. The product was named the Adversarial Robustness Toolbox.

It is an open-source software library written in Python, the most popular programming language for developing, testing, and deploying deep neural networks.

An example of an adversarial attack: after noise is added to the image on the left, the neural network starts to see a hooded coat in the picture instead of a panda

The tool is intended for creating new practical defense methods and deploying them in commercial AI systems. Researchers can also use it to benchmark new security techniques against the current state of the art.

According to IBM, the Adversarial Robustness Toolbox provides developers with interfaces for assembling end-to-end defense systems, using individual methods as building blocks.
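To illustrate the "building blocks" idea, here is a hypothetical sketch (not the ART API; all names are invented for illustration) of chaining a simple input-preprocessing defense, bit-depth reduction in the spirit of "feature squeezing", in front of an arbitrary classifier:

```python
import numpy as np

def squeeze(x, bits=3):
    """Quantize inputs in [0, 1] to 2**bits levels, wiping out
    perturbations smaller than half a quantization step."""
    levels = 2 ** bits - 1
    return np.round(np.asarray(x) * levels) / levels

def defend(classifier, *preprocessors):
    """Compose preprocessing defenses in front of a classifier."""
    def defended(x):
        for p in preprocessors:
            x = p(x)
        return classifier(x)
    return defended

# A stand-in "classifier": thresholds the mean pixel value
classifier = lambda x: int(np.mean(x) > 0.5)

defended = defend(classifier, squeeze)

# A clean input sitting exactly on quantization levels (levels = 7)
x = np.array([6/7, 6/7, 5/7, 5/7])
# A small perturbation (under half a quantization step) is squeezed away
x_noisy = x - 0.06

print(defended(x), defended(x_noisy))  # 1 1: the noise is removed
```

The point of the composition is that `defend` accepts any number of preprocessing steps, so individual defense methods can be stacked in front of an existing model without changing the model itself.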

The library contains state-of-the-art algorithms for crafting adversarial examples, as well as methods for defending neural networks against them.
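One of the best-known algorithms of this kind is the Fast Gradient Sign Method (FGSM). Below is a hypothetical, self-contained sketch of it run against a hand-rolled logistic regression; it illustrates the idea only and is not ART code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: class 0 clustered at (-2, -2), class 1 at (+2, +2)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Train logistic regression by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

predict = lambda x: int(x @ w + b > 0)

x = np.array([2.0, 2.0])                    # correctly classified as 1

# FGSM perturbs the input along the sign of the loss gradient.
# For logistic loss with true label y = 1, d(loss)/dx = (p - 1) * w,
# and since p < 1 that gradient's sign is simply -sign(w).
eps = 3.0                                   # exaggerated for the toy data
x_adv = x + eps * (-np.sign(w))

print(predict(x), predict(x_adv))           # 1 0: the copy is misclassified
```

For image classifiers the same one-step perturbation is applied per pixel with a much smaller epsilon, which is why the altered image looks unchanged to a human.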

Adversarial training involves training two competing neural networks. One network, for example, generates video, while the other looks for differences between real and generated video. Over time the generator learns to fool the discriminator and to produce videos resembling, for example, scenes from beaches, train stations, hospitals, and golf courses.
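The two-network game can be sketched in one dimension. The following is a deliberately tiny, hypothetical toy (not ART code, all parameters invented): a generator g(z) = a*z + b tries to mimic data drawn from N(4, 0.5), while a discriminator D(x) = sigmoid(w*x + c) tries to tell real samples from generated ones, and the two take alternating gradient steps against each other:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1 / (1 + np.exp(-np.clip(t, -30, 30)))

a, b = 1.0, 0.0      # generator parameters
w, c = 0.1, 0.0      # discriminator parameters
lr = 0.02

for _ in range(1500):
    real = rng.normal(4.0, 0.5, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1, D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1 (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - d_fake) * w           # d(-log D)/d(fake)
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# Over training, the generator's offset b drifts toward the real mean (4.0)
print(round(b, 1))
```

Each step of this loop is the "deception" dynamic the article describes: the discriminator sharpens its test, and the generator shifts its output distribution to pass that test.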

Attacks based on adversarial examples pose a serious threat when AI systems are deployed where security is critical. An attacker can trick a computer into misclassifying video, photos, or speech, while to the human eye or ear the counterfeit content is indistinguishable from the real thing. In this way it is possible to deceive face recognition systems or to disrupt self-driving cars by blocking the recognition of road signs.[1]

Notes