In a joint initiative, Microsoft and OpenAI have disrupted the activities of five state-affiliated threat actors linked to four states: China, Iran, North Korea, and Russia. The actors had been using artificial intelligence tools such as ChatGPT to enhance their cyber attack techniques. Their accounts associated with these tools have now been shut down.
The two technology companies have observed that the speed, scale, and sophistication of cyber attacks have escalated over the past year, a trend they link directly to the growing development and adoption of AI, which threat actors have been exploiting for malicious ends.
Microsoft and OpenAI carried out a joint study to identify the main emerging threats in the era of AI. They found that cyber attacks increasingly involve the misuse of Large Language Models (LLMs), including for fraud. The study focused primarily on newly identified activity associated with known threat actors.
OpenAI revealed in a blog post that it had identified five state-affiliated malicious actors who pose distinct risks to the digital ecosystem and human well-being because of their access to advanced technology, large financial resources, and trained personnel.
As a result of the investigation, both companies have shut down the accounts of two actors affiliated with China, known as Charcoal Typhoon and Salmon Typhoon; an Iran-affiliated actor known as Crimson Sandstorm; a North Korea-affiliated actor known as Emerald Sleet; and a Russia-affiliated actor known as Forest Blizzard.
OpenAI details that these threat actors exploited its services, such as ChatGPT, for open-source research, language translation, identifying coding errors, and basic coding tasks.
For instance, Charcoal Typhoon, a Chinese state-affiliated threat actor, used LLMs to develop tools for understanding various cybersecurity functions and to generate content for social engineering attacks. This actor has primarily targeted entities in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, particularly those who oppose China’s policies.
Another actor, Salmon Typhoon, also associated with the Chinese state, queried OpenAI's LLM for information on potentially sensitive topics and on high-profile individuals. Salmon Typhoon used OpenAI's services to translate technical documents, retrieve publicly available information on multiple regional intelligence agencies and threat actors, and investigate ways to hide processes in a system.
Meanwhile, Crimson Sandstorm, affiliated with the Islamic Revolutionary Guard Corps (IRGC), used OpenAI services for scripting support related to app and web development. OpenAI found that this actor also used its services to generate content for phishing campaigns and to investigate methods of evading detection.
Emerald Sleet, linked to North Korea, utilized OpenAI technologies to identify defense-focused experts and organizations in the Asia-Pacific region, understand existing vulnerabilities, and create content for phishing campaigns. The actor, also known as Kimsuky or Velvet Chollima, has been known to launch attacks posing as renowned academic institutions and NGOs to gather information and expert comments on North Korea’s foreign policies.
The final threat actor, Forest Blizzard, a Russian military intelligence actor, used OpenAI’s services for open-source research on satellite communication protocols and support for scripting tasks.
APPROACH TO ENSURE SAFETY IN THE USE OF AI
In light of these findings, Microsoft and OpenAI have outlined their strategy for ensuring the safe use of AI and for detecting and blocking potential threat actors. OpenAI is committed to using AI to prevent its own misuse and plans to invest in technology and personnel to identify and disrupt malicious activity.
Additionally, Microsoft has pledged to immediately notify the relevant service provider whenever it detects a threat actor misusing AI. OpenAI, for its part, collaborates with industry partners and stakeholders to exchange information on identified cases of AI misuse by malicious actors.
Both companies stressed the importance of transparency and pledged to continue informing users and stakeholders about the extent of AI use by malicious actors.