OpenAI denies hackers access to ChatGPT


OpenAI has blocked state-affiliated cybercriminals from several countries from using its products.

Working with Microsoft Threat Intelligence, the company identified and terminated accounts belonging to five threat groups:

  • Chinese government-linked hackers Charcoal Typhoon and Salmon Typhoon;
  • Crimson Sandstorm (Iran);
  • Emerald Sleet (DPRK);
  • Forest Blizzard (Russia).

The investigation revealed that Charcoal Typhoon used AI to explore various cybersecurity tools, debug code, and create content for phishing campaigns.

Salmon Typhoon translated technical documents, researched intelligence agencies and regional threat actors, and explored ways to hide malicious processes on a compromised system.

Crimson Sandstorm created phishing applications and websites using ChatGPT.

Emerald Sleet, in addition to writing malicious code, sought out experts and organizations involved in defense issues in the Asia-Pacific region.

Forest Blizzard requested information on satellite communications protocols and radar imaging technology.

Following an internal investigation, OpenAI concluded that GPT-4 offers only limited capabilities for malicious tasks. Nevertheless, the company intends to strengthen the security of its products by studying how they are actually used in cybercriminal activity and by introducing enhanced monitoring of user interactions with the platform.
