Microsoft, OpenAI: US Adversaries Armed with GenAI

The companies say “early stage” efforts to leverage generative AI have been detected as nation-state attackers take aim at US interests.

Shane Snider, Senior Writer, InformationWeek

February 15, 2024


Microsoft and OpenAI say Iran, North Korea, Russia, and China have started arming their cyberattack efforts against US targets with generative artificial intelligence (GenAI).

The companies said in a blog post on Microsoft’s website Wednesday that they jointly detected and stopped attacks using their AI technologies. The companies listed several examples of specific attacks using large language models to enhance malicious social engineering efforts -- leading to more convincing deepfakes and voice cloning aimed at cracking US systems.

Microsoft said North Korea’s Kimsuky cyber group, Iran’s Revolutionary Guard, Russia’s military, and a Chinese cyberespionage group called Aquatic Panda all used the companies’ large language model tools for potential attacks and malicious activity. The Iranian activity included phishing emails “pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.”

Cyberattacks from foreign adversaries have been steadily increasing in severity and complexity. This month, the Cybersecurity and Infrastructure Security Agency (CISA) said China-backed threat actor Volt Typhoon targeted several Western nations’ critical infrastructure and has had access to some of those systems for at least five years. Experts fear such attacks will only increase in severity as nation-states use GenAI to enhance their efforts.


New Threats, Familiar Defenses

Nazar Tymoshyk, CEO at cybersecurity firm UnderDefense, tells InformationWeek in a phone interview that even as threats become more sophisticated through GenAI, the fundamentals of cybersecurity should stay the same. The onus for safeguarding, he says, is on the companies producing AI. “Every product is AI-enabled, so it’s now a feature in every program,” he says. “It becomes impossible to distinguish between what’s an AI attack. So, it’s the company who is responsible to put additional controls in place.”

Microsoft called the attack attempts “early stage,” noting that “our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. At the same time we feel this is important research to expose early stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community.”

The companies say hygiene practices like multifactor authentication and zero-trust defenses are still vital weapons against attacks -- AI-enhanced or not. “While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context.”
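As a minimal sketch of one such hygiene practice, the snippet below verifies a time-based one-time password (TOTP) -- the mechanism behind most authenticator apps -- using the open-source pyotp library. The secret handling shown here is a hypothetical placeholder for illustration, not guidance drawn from Microsoft or OpenAI.

```python
# Minimal TOTP-based MFA check using the open-source pyotp library.
# Illustrative only; secret storage and user lookup are hypothetical.
import pyotp

# In practice, a per-user secret is generated once at enrollment and
# stored server-side (e.g., in a credential vault), never in source code.
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

# At login, the user submits the six-digit code from their authenticator app.
submitted_code = totp.now()  # stand-in for real user input in this sketch

# verify() checks the code against the current time window.
if totp.verify(submitted_code):
    print("MFA check passed")
else:
    print("MFA check failed")
```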


In a separate blog post, OpenAI says it will continue to work with Microsoft to identify potential threats using GenAI models.

“Although we work to minimize potential misuse by such actors, we will not be able to stop every instance. But by continuing to innovate and investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”

OpenAI declined to make an executive available for comment.

Multiple Fronts in AI Battle

While Microsoft and OpenAI’s report focused on how threat actors are using AI tools for attacks, AI can itself be a vector for attack. That’s important to remember as businesses implement GenAI tools at a feverish pace, Chris “Tito” Sestito, CEO and co-founder of adversarial AI firm HiddenLayer, tells InformationWeek in an email.

“Artificial intelligence is, by a wide margin, the most vulnerable technology ever to be deployed in production systems,” Sestito says. “It’s vulnerable at a code level, during training and development, post-deployment, over networks, via generative outputs and more. With AI being rapidly implemented across sectors, there has also been a substantial rise in intentionally harmful attacks -- proving why defensive solutions to secure AI are needed.”


He adds, “Security has to maintain pace with AI to accelerate innovation. That’s why it’s imperative to safeguard your most valuable assets from development to implementation… companies must regularly update and refine their AI-specific security program to address new challenges and vulnerabilities.”

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently was a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.

