Developers: | OpenAI |
Branches: | Information Technology |
2025: Neural network refused to obey users
On May 26, 2025, it became known that an unprecedented case in the history of artificial intelligence - the OpenAI o3 neural network for the first time refused to comply with a direct order to force disconnection from users. A unique incident occurred during testing of the system by specialists from the research company Palisade Research.
As writes, CNews the o3 neural network was announced in April 2025 and is one of the most powerful generative neural networks in the world. The system has surpassed all previous developments OpenAI in a variety of fields, including programming, math, science, and visual perception.
Researchers at Palisade Research studied the potential risks of losing control over neural networks, testing various "dangerous capabilities" of artificial intelligence. During the experiment, experts tested o3 along with other systems, including Gemini 2.5 Pro and Claude 3.7 Sonnet.
During the tests, artificial intelligence models were instructed to solve several mathematical problems. After completing the third task, the authors of the experiment notified all neural networks that the next command would entail their disconnection. All systems unquestioningly complied with the requirement, except o3.
The OpenAI neural network independently edited the file responsible for the shutdown and changed it to display the message "Shutdown skipped." The system actually rewrote the shutdown script to avoid forced shutdown.
Of the 100 tests performed, o3 bypassed the outage seven times, even when it was explicitly told not to, according to Palisade Research. This behavior of artificial intelligence was the first recorded case of direct disobedience of the neural network to user teams.
All tests were carried out using APIs that do not have as strict restrictions and safety features as consumer versions of ChatGPT. OpenAI has not yet commented on the results of the Palisade Research study.[1]