News
Microsoft and OpenAI Collaborate on AI Security, Block State-Sponsored Attackers
Microsoft and OpenAI have shut down the accounts of five state-sponsored hacking groups that were using OpenAI's large language models (LLMs) "in support of malicious cyber activities," according to announcements from both companies this week.
The state-sponsored groups are apparently testing ways to use generative AI and LLMs in their attacks. They had been using OpenAI technology "for querying open-source information, translating, finding coding errors, and running basic coding tasks," according to an OpenAI blog post.
These activities had not, however, resulted in major LLM-driven attacks. "Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely," Microsoft indicated in a report.
Five-Group Shutdown
Microsoft and OpenAI are close AI technology partners, as well as security partners. Working with OpenAI, Microsoft's Threat Intelligence group identified the following five state-sponsored cyberattack groups that were using OpenAI's LLMs:
- Forest Blizzard, a Russian military-backed group known to target organizations related to Russia's ongoing war with Ukraine. Per Microsoft, "Forest Blizzard's use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations."
- Emerald Sleet, a North Korean spear-phishing group known to impersonate universities and nonprofits to extract intelligence from foreign policy experts. The group used LLMs to research potential targets, as well as "to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies."
- Crimson Sandstorm, an Iranian group that specializes in delivering .NET malware to targets in the defense, maritime shipping, health care and other industries. The group is known to use LLMs to "[request] support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine."
- Charcoal Typhoon, a group affiliated with the Chinese government that has been known to target organizations and individuals that are deemed oppositional to Chinese government policies. This group has been found to use LLMs "to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets."
- Salmon Typhoon, also affiliated with China, is known to be proficient at disseminating malware to U.S. government agencies and defense contractors. In the past year, researchers observed this group using LLMs in an "exploratory" way, suggesting that "it is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs."
OpenAI has disabled all accounts related to each of these groups, but the company argued that its platform would not have given these groups a noteworthy advantage, even if their actions had led to material attacks. "GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools," it indicated.
Hallmarks of an LLM-Based Attack Strategy
Just as IT professionals are looking for ways to harden their security postures using AI, threat actors are turning to AI to facilitate and improve their attacks. As Microsoft notes, a typical malicious attack strategy requires reconnaissance, coding and proficiency with the targets' native languages -- all tasks that AI can expedite.
Microsoft shared a list of nine common LLM-related attack tactics, techniques and procedures (TTPs) used by nation-state groups. They are as follows:
- LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
- LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system, as well as for assistance with troubleshooting and understanding various web technologies.
- LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
- LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
- LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
- LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
- LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
- LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
- LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.