Microsoft’s Report on Hackers Exploiting AI Tools
Microsoft recently published an alarming report detailing the detection and disruption of multiple cyberattack attempts. The culprits were reportedly hackers linked to nations including China, Russia, Iran, and North Korea. Microsoft said these hackers were exploiting generative artificial intelligence (AI) tools the company developed in collaboration with OpenAI.
Use of Large Language Models
The American tech giant explained that the hackers were using large language models (LLMs) for a variety of malicious activities, including gathering intelligence on rival nations, developing potentially harmful code, and deceiving their targets.
Microsoft’s Response
In response to this discovery, Microsoft announced a blanket ban on the use of its AI products by government-backed hacker groups. A Microsoft representative was quoted as saying, “Regardless of whether or not there is a violation of the law or our terms of service, we simply do not want these actors that we have identified, which we know are threats of various types, to have access to this technology.”
Identified Hacker Groups
On its official website, Microsoft named the hacker groups suspected of exploiting AI for cyberattacks: Forest Blizzard from Russia, Emerald Sleet from North Korea, Crimson Sandstorm from Iran, and Charcoal Typhoon and Salmon Typhoon, both from China. Each group allegedly has its own agenda and objectives, ranging from military and political to economic and social.
Hackers’ Use of AI
The report detailed how each group is using AI for its activities. Forest Blizzard, for instance, is reportedly using AI to research satellite and radar technologies relevant to the ongoing war in Ukraine. Emerald Sleet and Crimson Sandstorm, linked to North Korea and Iran respectively, are using AI to exploit vulnerabilities in Western network systems and develop more sophisticated phishing methods. The Chinese groups Charcoal Typhoon and Salmon Typhoon are allegedly creating programs to access and search for information on sensitive topics, and planning new attacks against various organizations.
Efficiency in Malicious Activities
The report emphasized that these hackers are pragmatically using AI to make their malicious cyber activities more efficient. Such activities include writing emails, translating documents, debugging code, and finding ways to evade detection by antivirus software.
Microsoft’s Actions
Following the discovery, Microsoft moved immediately to block and disable the accounts and resources used by these hackers. The tech giant also launched research into strengthening the protection of its services and AI systems, and announced a set of principles to guide its response to the misuse of its tools by malicious actors. These principles include identifying and taking action against hackers, notifying other AI service providers, collaborating with other stakeholders, and being transparent about the actions taken.
Security Concerns
The revelation that government-backed hackers are using AI tools for malicious activities has sparked concern among technology companies. However, both Microsoft and OpenAI described the hackers’ use of their tools as being at an “early” stage. Microsoft said its main objective in releasing the information is to “ensure the safe and responsible use of AI technologies like ChatGPT.” According to Reuters, senior cybersecurity officials in Western countries have been warning about the misuse of AI tools by malicious actors since last year. South Korea, a primary target of North Korean hackers, said it is closely monitoring the North’s activities in light of the possible misuse of generative AI.