OpenAI Cuts AI Safety Testing Due to Competition
OpenAI, the American company behind ChatGPT, is cutting back the time and resources it spends on safety testing of new AI models, the Financial Times reported, citing insiders. Experts fear that the race for market leadership is pushing the company to accelerate product releases, raising the risk that the technology will be misused.

Safety testing used to take months: GPT-4, for example, was evaluated for six months before its release in 2023. Now employees and independent experts get just days to assess the risks of new models, such as the upcoming o3; some testers have been given less than a week, which undermines the quality of threat analysis.
The main driver of the acceleration is pressure from competitors: Meta, Google, and startups such as Elon Musk's xAI. There are as yet no uniform safety standards for AI, although from late 2025 EU law will require testing of the most powerful models.
"There are no laws requiring (companies) to inform the public about all the scary capabilities... also they (companies) are under a lot of pressure to compete with each other, so they are not going to stop improving their capabilities."
Daniel Kokotailo, former OpenAI developer
In April 2024, OpenAI and other companies signed voluntary agreements with the US and UK allowing their AI models to be tested. At the current pace of development, however, it is unclear whether the company will be able to identify dangerous uses of its technology in time.