| The name of the base system (platform): | Artificial intelligence (AI, Artificial intelligence, AI) |
| Developers: | AppSec Solutions |
| Date of the premiere of the system: | 2025/09/16 |
| Branches: | Information security |
| Technology: | Information Security - Information Leakage Prevention, Information Security Management (SIEM) |
The main articles are:
2025: GenAi Presentation
AppSec Solutions has released an AI platform to protect against cyber threats. The company announced this on September 16, 2025.
The product of the domestic vendor will allow you to automatically check any systems with participation, artificial intelligence be it AI assistants corporate solutions with, large language models for resistance to. harmful to the attacks
AppSec.GenAi is a solution that allows you to provide an integrated approach to the security of large language models and generative AI systems. The product is aimed at finding vulnerabilities and analyzing the resistance of the AI system to cyber attacks.
The main task of the platform is to determine how the language model resists manipulation, identify risks and prevent theft of sensitive data. The product of the domestic vendor allows you to test the model for various penetration scenarios, ranging from Phishing Attack and Jailbreaking, to complex attacks on sound and multimodal models. AppSec.GenAi is a kind of provocateur that identifies the weaknesses of AI models and allows developers to finalize them in a timely manner.
| We test all common model modality types with more than 40 scenario types. This allows you to assess how the model responds to unsafe impacts, for example: whether it deviates from its security instructions, is ready to answer provocative questions and generate unwanted content. Based on the results of the checks, the user has access to several types of reports, including impact assessment, recommendations, as well as logs that allow analyzing the behavior of the model at the time of the attack, - said Maria Usacheva, head of products at AI Security, AppSec Solutions, - In addition, the scanner can be integrated into the development process at an early stage, which allows scanning for vulnerabilities even at the model testing stage, before being put into operation. |
The tool is suitable for language models developed for the tasks of any business: fintech, medical AI technologies, production management or BigTech.
| According to the National Center for Artificial Intelligence under the Government of the Russian Federation, 43% of Russian companies use artificial intelligence in their work, but only 36% have at least minimal security policies in this area. Attackers rightly believe that AI has become a vulnerable place, and are actively using it. At the same time, they have a number of features that do not allow ensuring their safety with traditional tools, in particular, they interact not only with a person, but also with each other, said Anton Basharin, Managing Director of AppSec Solutions. |
The range of vulnerabilities in AI systems is wide. This is a deliberate "infection of data" on which the model is trained, theft of sensitive data using special prompts, "overload" of an AI agent with specially modified requests, which leads to a quick exhaustion of the company's computing power, and others.
AppSec.GenAi identifies vulnerabilities in language models, allowing timely measures to protect the developer's models.
