Nation-state threat actors backed by the governments of China, Iran, North Korea and Russia are exploiting the large language models (LLMs) used by generative AI services such as OpenAI’s ChatGPT, but the technology has not yet been used in any significant cyber attacks, according to the Microsoft Threat Intelligence Center (MSTIC).
Researchers at the MSTIC have been working hand-in-hand with OpenAI – with which Microsoft has a longstanding and occasionally controversial multibillion dollar partnership – to track various adversary groups and share intelligence on threat actors, and their emerging tactics, techniques and procedures (TTPs). Both organisations are also working with MITRE to integrate these new TTPs into the MITRE ATT&CK framework and the ATLAS knowledge base.
Over the past few years, said MSTIC, threat actors have been closely following developing trends in tech in parallel with defenders, and like defenders they have been looking at AI as a means of enhancing their productivity, and exploiting platforms such as ChatGPT that could be helpful to them.
“Cyber crime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” the MSTIC team wrote in a newly-published blog post detailing their work to date.
“On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.”
The team said that while threat actors’ motives and sophistication vary, they share common tasks, such as reconnaissance and research, coding and malware development, and in many cases, learning English. Language support in particular is emerging as a key use case to assist threat actors with social engineering and victim negotiations.
However, said the team, at the time of writing, this is about as far as threat actors have gone. They wrote: “Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely.”
They added: “While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context. As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defences are essential because attackers may use AI-based tools to improve their existing cyber attacks that rely on social engineering and finding unsecured devices and accounts.”
What have they been doing?
The MSTIC has today shared details of the activities of five nation-state advanced persistent threat (APT) groups it has caught red-handed experimenting with ChatGPT: one each from Iran, North Korea and Russia, and two from China.
The Iranian APT, Crimson Sandstorm (aka Tortoiseshell, Imperial Kitten, Yellow Liderc), which is linked to Tehran’s Islamic Revolutionary Guard Corps (IRGC), targets multiple verticals with watering hole attacks and social engineering to deliver custom .NET malware.
Some of its LLM-generated social engineering lures have included phishing emails purporting to be from a prominent international development agency, while another campaign attempted to lure feminist activists to a fake website.
It also used LLMs to generate code snippets to support the development of applications and websites, interact with remote servers, scrape the web, and execute tasks when users sign in. It also attempted to use LLMs to develop code that would enable it to evade detection, and to learn how to disable antivirus tools.
The North Korean APT, Emerald Sleet (aka Kimsuky, Velvet Chollima), favours spear-phishing attacks to gather intelligence from experts on North Korea, and often masquerades as academic institutions and NGOs to lure them in.
Emerald Sleet has been using LLMs largely in support of this activity, as well as research into thinktanks and experts on North Korea, and generation of phishing lures. It has also been seen interacting with LLMs to understand publicly-disclosed vulnerabilities – notably CVE-2022-30190, aka Follina, a zero-day in Microsoft Support Diagnostic Tool – to troubleshoot technical problems, and to get help using various web technologies.
The Russian APT, Forest Blizzard (aka APT28, Fancy Bear), which operates on behalf of Russian military intelligence through GRU Unit 26165, has been actively using LLMs in support of cyber attacks on targets in Ukraine.
Among other things, it has been caught using LLMs to research satellite communications and radar imaging technologies that may relate to conventional military operations against Ukraine, and to seek assistance with basic scripting tasks, including file manipulation, data selection, regular expressions and multiprocessing. MSTIC said this may be an indication that Forest Blizzard is trying to work out how to automate some of its work.
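To give a sense of what such “basic scripting tasks” look like in practice, the illustrative Python sketch below (not code attributed to Forest Blizzard, and with invented log data) combines the categories MSTIC lists – data selection via regular expressions and parallel processing via multiprocessing – in the kind of routine automation an attacker might ask an LLM to help write:

```python
import re
from multiprocessing import Pool

# Hypothetical log lines, invented for illustration only.
LOG_LINES = [
    "2024-01-10 10:01:02 login user=alice status=ok",
    "2024-01-10 10:01:05 login user=bob status=fail",
    "2024-01-10 10:02:11 login user=carol status=fail",
]

# Regular expression selecting only failed-login entries.
PATTERN = re.compile(r"user=(\w+) status=fail")

def extract_failed_user(line):
    """Return the username from a failed-login line, or None."""
    match = PATTERN.search(line)
    return match.group(1) if match else None

def failed_logins(lines):
    """Scan lines in parallel, keeping usernames from failed logins."""
    with Pool(processes=2) as pool:
        results = pool.map(extract_failed_user, lines)
    return [user for user in results if user]

if __name__ == "__main__":
    print(failed_logins(LOG_LINES))  # ['bob', 'carol']
```

Tasks of this shape are trivial for an experienced developer, which is consistent with MSTIC’s assessment that the activity observed so far reflects productivity-seeking rather than novel attack capability.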
The two Chinese APTs are Charcoal Typhoon (aka Aquatic Panda, ControlX, RedHotel, Bronze University) and Salmon Typhoon (aka APT4, Maverick Panda).
Charcoal Typhoon has a broad operational scope targeting multiple key sectors such as government, communications, fossil fuels, and information technology, in Asian and European countries, whereas Salmon Typhoon tends to go for US defence contractors, government agencies, and cryptographic technology specialists.
Charcoal Typhoon has been observed using LLMs to explore augmenting its technical nous, looking for help in tooling development, scripting, understanding commodity cyber security tools, and generating social engineering lures.
Salmon Typhoon is also using LLMs in an exploratory way, but has tended to try to use them to source information on sensitive geopolitical topics of interest to China, high-profile individuals, and US global influence and internal affairs. However, on at least one occasion it also tried to get ChatGPT to write malicious code – MSTIC noted that the model declined to help with this, in line with its ethical safeguards.
All of the observed APTs have had their accounts and access to ChatGPT suspended.
Reaction
Commenting on the MSTIC – OpenAI research, Neil Carpenter, principal technical analyst at Orca Security, said the most important takeaway for defenders is that while nation-state adversaries are interested in LLMs and generative AI, they are still in the early stages and their interest has not yet resulted in any novel or advanced techniques.
“This indicates that organisations who are focused on existing best practices in defending their assets and detecting and responding to potential incidents are well positioned; additionally, organisations that are pursuing advanced approaches like zero-trust will continue to benefit from these investments,” Carpenter told Computer Weekly in emailed comments.
“Generative AI approaches can definitely help defenders in the same ways that Microsoft describes threat actors using them; to operate more efficiently. For instance, in the case of the currently-exploited Ivanti vulnerabilities, AI-powered search allows defenders to rapidly identify the most critical, exposed, and vulnerable assets even if initial responders lack specialist knowledge of domain-specific languages used in their security platforms,” he added.