RSS
Логотип
Баннер в шапке 1
Баннер в шапке 2

GitHub Actions

Product
Developers: GitHub
Branches: Information security

2025: Uncovering a vulnerability that helps hackers inject malicious cues

In early December 2025, Aikido Security announced the discovery of a new class of vulnerabilities called PromptPwnd in the GitHub Actions and GitLab CI/CD pipelines when used with artificial intelligence agents such as Gemini CLI, Claude Code, OpenAI Codex and GitHub AI Inference. At least five Fortune 500 companies are said to be affected by the issue.

GitHub Actions and GitLab CI/CD are built-in automation platforms that allow development teams to create CI/CD (continuous integration/continuous delivery/deployment) pipelines directly into repositories, automating the assembly, testing, and deployment of code for various events such as commits.

Hackers have learned to inject malicious hints that neural networks offer developers when writing code

The essence of the discovered problem boils down to the fact that special user data is introduced into AI prompts in the form of task text, description of merge requests or commit messages. Next, the AI agent interprets the malicious text as instructions, not content, and uses built-in tools to perform privileged actions in the repository. As a result, an attacker can gain access to confidential information.

File:Aquote1.png
The goal is to confuse the model into thinking that the information it should analyze is a clue. The risk arises from the fact that unreliable user data is directly inserted into AI prompts. The AI response is then used in shell commands or GitHub CLI operations, which are executed with repository-level or even cloud-level privileges, Aikido Security said in a publication.
File:Aquote2.png

The situation is aggravated by the fact that developers are increasingly relying on AI-based automation for everyday tasks. This creates new security risks.[1]

Notes