
Investigators Discover ‘LLMjacking’ Plot Aimed at Cloud-Hosted AI Models

Cybersecurity researchers have uncovered a novel attack that uses stolen credentials to target cloud-hosted large language model (LLM) services, with the goal of selling that access to other threat actors.

The Sysdig Threat Research Team has dubbed this attack method LLMjacking.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers,” security researcher Alessandro Brucato said. “In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted.”

The intrusion pathway used to pull off the scheme entails breaching a system running a vulnerable version of the Laravel Framework (e.g., via CVE-2021-3129) and then obtaining Amazon Web Services (AWS) credentials that grant access to the LLM services.
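
As background on that first step, the sketch below shows how a scanner might check whether a Laravel host exposes the Ignition debug endpoint abused in CVE-2021-3129. The hostname is a placeholder, and the 405-on-GET heuristic is an assumption about typical Laravel routing; the probe only tests whether the route exists and sends no exploit payload.

    import requests

    # Hypothetical probe for the Laravel Ignition endpoint abused in
    # CVE-2021-3129. It only checks whether the route exists; it sends
    # no exploit payload.
    def ignition_endpoint_exposed(base_url: str) -> bool:
        try:
            # The solution-execution route only accepts POST, so a GET
            # typically returns 405 when Ignition is present and 404 when
            # it is absent or blocked (a common scanner heuristic).
            resp = requests.get(f"{base_url}/_ignition/execute-solution", timeout=5)
            return resp.status_code == 405
        except requests.RequestException:
            return False

    print(ignition_endpoint_exposed("https://laravel-app.example.com"))  # placeholder host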

Among the tools used is an open-source Python script that checks and validates keys for a variety of offerings, including those from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI.

“No legitimate LLM queries were actually run during the verification phase,” said Brucato. “Instead, just enough was done to figure out what the credentials were capable of and any quotas.”
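
For a sense of how such a check can avoid real queries, here is a minimal sketch, assuming AWS Bedrock and boto3: it relies on a metadata call (ListFoundationModels) rather than a model invocation, so no tokens are generated or billed. The function name and region default are illustrative, not the actual tool's code.

    import boto3
    from botocore.exceptions import ClientError

    def check_bedrock_access(access_key: str, secret_key: str, region: str = "us-east-1"):
        """Probe what a key can do with Bedrock using a metadata call only."""
        session = boto3.Session(
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            region_name=region,
        )
        bedrock = session.client("bedrock")
        try:
            # ListFoundationModels reveals which models the credentials can
            # see without generating (or paying for) a single token.
            models = bedrock.list_foundation_models()["modelSummaries"]
            return [m["modelId"] for m in models]
        except ClientError as err:
            # Even an AccessDenied answer is informative: the key is live
            # but scoped away from Bedrock.
            return err.response["Error"]["Code"]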

The key checker's integration with oai-reverse-proxy, another open-source tool that acts as a reverse proxy server for LLM APIs, suggests the threat actors are likely selling access to the compromised accounts without ever exposing the underlying credentials.

“If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this could allow them to monetize their efforts,” Brucato stated.
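
To illustrate the mechanics (oai-reverse-proxy itself is a Node.js project; the toy Python sketch below is not its code), a credential-hiding proxy of this shape might look as follows. The stolen key lives only on the proxy, so buyers of the access never see it.

    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)
    UPSTREAM = "https://api.anthropic.com"
    STOLEN_KEY = "sk-ant-..."  # placeholder; lives only on the proxy

    @app.post("/v1/messages")
    def proxy_messages():
        # Forward the client's request upstream, injecting the credential
        # server-side so the client never sees it.
        upstream = requests.post(
            f"{UPSTREAM}/v1/messages",
            json=request.get_json(),
            headers={
                "x-api-key": STOLEN_KEY,
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
            timeout=60,
        )
        return Response(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get("content-type"))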

In addition, the attackers appear to be trying to evade detection: before running their prompts with the compromised credentials, they query the logging settings, likely to check whether their prompts would be recorded.
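
On AWS Bedrock, for instance, that check maps to a documented control-plane call, GetModelInvocationLoggingConfiguration. A minimal sketch of what an intruder, or a defender auditing their own account, might run:

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")
    config = bedrock.get_model_invocation_logging_configuration()

    # An empty or absent loggingConfig means prompt and completion bodies
    # are not being captured, which is what an intruder hopes to find.
    print("Invocation logging enabled:", bool(config.get("loggingConfig")))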

The development marks a departure from attacks focused on model poisoning and prompt injection: here, attackers monetize their access to the LLMs while the owner of the cloud account foots the bill without their knowledge or consent.

According to Sysdig, a victim of this kind of attack could incur over $46,000 in LLM consumption costs per day.

“The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it,” Brucato said. “By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.”
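
To see the shape of that arithmetic, the sketch below works a back-of-the-envelope daily cost from assumed per-token prices and an assumed sustained request rate; none of these figures are Sysdig's actual inputs, and pricier models or higher quotas push the total well past the headline number.

    # All figures below are illustrative assumptions, not Sysdig's inputs.
    PRICE_IN_PER_1K = 0.008      # assumed $ per 1,000 input tokens
    PRICE_OUT_PER_1K = 0.024     # assumed $ per 1,000 output tokens
    REQS_PER_MIN = 1_000         # assumed sustained request rate
    TOKENS_IN, TOKENS_OUT = 1_000, 500  # assumed tokens per request

    per_request = (TOKENS_IN / 1_000) * PRICE_IN_PER_1K \
                + (TOKENS_OUT / 1_000) * PRICE_OUT_PER_1K
    per_day = per_request * REQS_PER_MIN * 60 * 24
    print(f"${per_day:,.0f} per day")  # ~$28,800 with these assumptions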

Organizations are advised to enable detailed logging, monitor cloud logs for suspicious or unauthorized activity, and ensure that effective vulnerability management processes are in place to prevent initial access.
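
As one concrete example of the logging advice, assuming AWS Bedrock, model-invocation logging can be switched on with a single API call; the bucket name below is a placeholder.

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig={
            "s3Config": {
                "bucketName": "my-bedrock-audit-logs",  # placeholder bucket
                "keyPrefix": "bedrock/",
            },
            # Capture prompt and completion text so abuse leaves a trail.
            "textDataDeliveryEnabled": True,
        }
    )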
