Developer: | Microsoft |
Release date: | May 2021 |
Industry: | Information technology |
2021: Open-source release
In early May 2021, Microsoft released Counterfit, an open-source tool to help developers assess the security of their machine learning systems.
The project, published on GitHub, includes a command-line tool and an automation layer that lets developers simulate cyber attacks on artificial intelligence systems.
Microsoft itself uses Counterfit to test its own AI models and also plans to use it during the development of new AI systems.
As of May 5, 2021, anyone can download the tool and deploy it via Azure Cloud Shell to run in a browser, or locally in an Anaconda Python environment.
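For local use, the setup published in the project's GitHub README at the time looked roughly like the following (a sketch, not an authoritative copy; the exact Python version and steps may have changed since):

```shell
# Clone the repository and set up an isolated Anaconda environment
git clone https://github.com/Azure/counterfit.git
cd counterfit
conda create --yes -n counterfit python=3.7
conda activate counterfit
pip install -r requirements.txt
# Launch the Counterfit command-line shell
python counterfit.py
```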
Counterfit can evaluate the robustness of artificial intelligence models hosted in various cloud, on-premises, and edge environments. Microsoft claims the tool is model-agnostic and supports various data types, including text and images.
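To illustrate what such an evaluation involves, the sketch below shows the core idea behind an evasion attack: perturbing an input within a small budget until the model's prediction flips. This is a self-contained toy, not Counterfit's API; the classifier and attack are hypothetical.

```python
def predict(weights, x):
    """Toy linear classifier: returns 1 if w.x > 0, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def evasion_attack(weights, x, budget=1.0, step=0.1):
    """Greedy evasion: nudge every feature against its weight's sign
    until the prediction flips or the perturbation budget is spent."""
    original = predict(weights, x)
    adv = list(x)
    spent = 0.0
    while spent < budget:
        for i, w in enumerate(weights):
            adv[i] -= step if w > 0 else -step  # push the score toward the boundary
        spent += step
        if predict(weights, adv) != original:
            return adv  # adversarial example found
    return None  # the model resisted within this budget

weights = [0.5, -0.25, 1.0]
x = [1.0, 0.0, 0.2]              # classified as 1 by the toy model
adv = evasion_attack(weights, x)  # a slightly perturbed input classified as 0
```

A tool like Counterfit automates many published attacks of this kind and reports which ones succeed against a deployed model, regardless of where it is hosted.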
Our tool makes published attack algorithms available to the security community and provides an extensible interface for creating, managing, and launching attacks on AI models. The tool is part of Microsoft's broader effort to give developers the ability to securely develop and deploy AI systems, Microsoft said.
The company listed three main scenarios for the use of Counterfit by cybersecurity specialists:
- penetration testing of AI systems;
- scanning AI systems for vulnerabilities;
- documenting cyber attacks on AI models.[1]
The tool ships with pre-loaded attack algorithms. Security specialists can also use the built-in cmd2 scripting engine to connect Counterfit to existing offensive testing tools.
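To show what a cmd2-style scripting engine looks like, here is a minimal sketch using the standard-library cmd module, which cmd2 extends. The command names are hypothetical and are not Counterfit's actual interface; the point is that such shells can be driven programmatically by other tools.

```python
import cmd
import io

class MiniAttackShell(cmd.Cmd):
    """Minimal sketch of a command shell in the style of Counterfit,
    built on the stdlib cmd module (which cmd2 extends)."""
    intro = "mini attack shell -- type a command"
    prompt = "attack> "

    def do_list(self, arg):
        """List the (hypothetical) loaded attack algorithms."""
        self.stdout.write("hop_skip_jump\nboundary_attack\n")

    def do_exit(self, arg):
        """Leave the shell; returning True stops the command loop."""
        return True

# Driving the shell from a script, the way an external testing tool
# could feed commands into a cmd2-based tool:
shell = MiniAttackShell(stdin=io.StringIO("list\nexit\n"),
                        stdout=io.StringIO())
shell.use_rawinput = False   # read from the supplied stdin, not a terminal
shell.cmdloop()
output = shell.stdout.getvalue()  # captured transcript of the session
```

Because input and output are plain streams, an existing red-team harness can pipe command scripts into such a shell and parse the results, which is the integration path the cmd2 engine enables.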