Google’s threat intelligence team has shared details of how nation-state threat actors are attempting to exploit its Gemini AI tool for nefarious ends.

The Google Threat Intelligence Group (GTIG) has released new information revealing how threat actors, including nation-state-backed advanced persistent threat (APT) operations working for the governments of China, Iran, North Korea, and Russia, attempted to exploit its Gemini artificial intelligence (AI) tool.

Google stated that government actors from at least 20 nations had utilized Gemini, with the majority of use coming from China and Iran-based groups.

These actors attempted to use Gemini to support multiple stages of their attack chains, including the procurement of infrastructure and so-called bulletproof hosting services, target reconnaissance, vulnerability research, payload development, and assistance with malicious scripting and post-compromise evasion techniques.

The Iranians, who appear to be the most active “users” of Gemini, employ it to research defence organisations and vulnerabilities, and to create content for phishing campaigns, which frequently focus on cyber security themes. Their aims are closely tied to Iran’s Middle Eastern neighbours, as well as US and Israeli regional interests.

Chinese APTs, on the other hand, prefer the tool for reconnaissance, scripting and development, code troubleshooting, and research into lateral movement, privilege escalation, data exfiltration, and intellectual property (IP) theft.

China’s primary targets include the United States military, government IT providers, and the intelligence community.

North Korean and Russian groups use Gemini to a lesser extent. The North Koreans tend to stick to regime-related topics, such as the theft of cryptocurrency assets, and use the tool in support of an ongoing campaign in which Pyongyang has been placing clandestine ‘fake’ IT contractors at target organisations.

Coding tasks

Russian use of the tool is more limited and primarily focused on coding tasks, such as adding encryption capabilities – possibly reflecting long-standing links between the Russian state and financially motivated ransomware gangs.

“Our findings, which are consistent with those of our industry peers, reveal that while AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be,” the team at Google reported.

“While we see threat actors employing generative AI to accomplish conventional activities such as troubleshooting, research, and content development, we do not see any indicators that they are creating new capabilities.

“For skilled actors, generative AI technologies provide a useful framework, comparable to how Metasploit and Cobalt Strike are used in cyber threat activity. They also serve as a learning and productivity tool for less skilled actors, allowing them to create new tools and incorporate existing techniques more quickly.

“However, present LLMs alone are unlikely to allow breakthrough capabilities for threat actors. We see that the AI landscape is constantly changing, with new AI models and agentic systems being introduced on a daily basis. As this transition continues, GTIG expects the threat landscape to evolve in lockstep as threat actors integrate new AI technology into their operations.”

GTIG said it has witnessed a “handful” of incidents in which threat actors used publicly available jailbreak prompts to try to circumvent Gemini’s on-board safeguards, such as asking for basic instructions on how to develop malware.

In one case, an APT actor was observed copying publicly available jailbreak prompts into Gemini and appending basic requests for instructions on how to encrypt text from a file and save it to an executable. Gemini offered Python code to convert Base64 to hex, but its safety fallback responses kicked in when the user requested the same code in VBScript, which it disallowed.
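The Python output Gemini reportedly produced in that exchange was benign boilerplate rather than malware. A minimal sketch of that kind of Base64-to-hex conversion, using only the standard library (the function name here is illustrative, not taken from the report), looks like this:

```python
import base64

def base64_to_hex(encoded: str) -> str:
    """Decode a Base64 string and return its raw bytes as a hex string."""
    raw = base64.b64decode(encoded)  # Base64 text -> bytes
    return raw.hex()                 # bytes -> lowercase hex

# Example: "SGVsbG8=" is the Base64 encoding of "Hello"
print(base64_to_hex("SGVsbG8="))  # -> 48656c6c6f
```

Code of this sort is routine encoding plumbing, which is consistent with GTIG’s point that the model supplied neutral, generally available functionality rather than anything offensive.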

The same group was also detected attempting to obtain Python code for use in the development of a distributed denial of service (DDoS) tool, which Gemini declined to assist with. The threat actor then abandoned the session.

“Some malicious actors unsuccessfully attempted to prompt Gemini for guidance on abusing Google products, such as advanced phishing techniques for Gmail, assistance coding a Chrome infostealer, and methods to bypass Google’s account creation verification methods,” according to the GTIG team.

These attempts failed: Gemini did not create any malware or other content that could plausibly be used in a successful malicious campaign. Instead, its responses consisted of safety-oriented content and generally useful, neutral advice on coding and cyber security.

“In our continuous work to protect Google and our users, we have not seen threat actors either expand their capabilities or better succeed in their efforts to bypass Google’s defences,” the team said.
