
‘ShadowRay’ vulnerability in Ray framework exposes thousands of AI workloads, compute power and data

Join us in Atlanta on April 10 to explore the future of the security workforce. We’ll explore the vision, benefits and use cases of artificial intelligence for security teams. Request an invitation here.

Thousands of companies use the Ray framework to scale and run highly complex, compute-intensive AI workloads – in fact, you’d be hard-pressed to find a large language model (LLM) that isn’t built with Ray.

These workloads contain large amounts of sensitive data, which researchers found could be highly exposed through critical vulnerabilities (CVEs) in the open source unified computing framework.

New research from Oligo Security shows that over the past seven months, the flaw allowed attackers to exploit thousands of companies’ AI production workloads, computing power, credentials, passwords, keys, tokens and “vast amounts” of other sensitive information.

The vulnerability is disputed – meaning it is not considered a risk and has no patch. That makes it a “shadow vulnerability,” one that won’t show up in scans. The researchers aptly named it “ShadowRay.”


This marks “the first known instance of an AI workload being actively exploited in the wild via vulnerabilities in modern AI infrastructure,” researchers Avi Lumelsky, Guy Kaplan and Gal Elbaz wrote.

“When an attacker gains access to a Ray production cluster, it’s a jackpot,” they assert. “Valuable corporate data coupled with remote code execution makes it easy to monetize an attack – while remaining in the shadows, completely undetected (and, with static security tools, undetectable).”

Creating a glaring blind spot

Many organizations rely on Ray to scale and run large, complex AI, data and SaaS workloads – including giants Amazon, Instacart, Shopify, LinkedIn and OpenAI, whose GPT-3 was trained with Ray.

This is because models with billions of parameters require significant computing power and cannot fit in the memory of a single machine. Maintained by Anyscale, the framework supports distributed workloads for training, serving and tuning AI models across architectures. Oligo researchers note that users don’t have to be proficient in Python: installation is simple and there are few dependencies.

They ultimately described Ray as a “Swiss Army Knife for Python enthusiasts and AI practitioners.”

But that makes ShadowRay all the more worrying. The vulnerability, tracked as CVE-2023-48022, stems from a lack of authorization in the Ray Jobs API, which exposes the API to remote code execution attacks. Researchers say anyone with network access to the dashboard can invoke “arbitrary jobs” without needing permission.
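To see why a missing authorization check amounts to remote code execution, it helps to look at the shape of a job-submission request. The sketch below only constructs the request with the standard library and never sends it; the endpoint path and JSON body follow Ray’s public Jobs REST API, and the host address is a placeholder, not a real target.

```python
import json
import urllib.request

# The Ray dashboard listens on port 8265 by default; this address is
# a placeholder for illustration only.
DASHBOARD = "http://ray-head.example.internal:8265"

# A job submission is just an HTTP POST whose "entrypoint" is an
# arbitrary shell command. With no authorization header required,
# anyone who can reach the dashboard can run code on the cluster.
payload = json.dumps({"entrypoint": "python -c 'print(42)'"}).encode()
request = urllib.request.Request(
    url=f"{DASHBOARD}/api/jobs/",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Constructed only, never sent: urllib.request.urlopen(request)
```

This is exactly why Anyscale’s guidance hinges on network isolation: nothing in the request itself distinguishes an operator from an attacker.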

The vulnerability was disclosed to Anyscale in late 2023 along with four other flaws; while the other four were quickly addressed, CVE-2023-48022 was not. Anyscale ultimately disputed the vulnerability, calling it “intended behavior and product functionality” that enables “triggering jobs and executing dynamic code within a cluster.”

Anyscale maintains that dashboards should not be internet-facing, or should be accessible only to trusted parties. The company said Ray lacks authorization because it is assumed to run in a secure environment with “correct routing logic” via network isolation, Kubernetes namespaces, firewall rules or security groups.

Oligo researchers wrote that the decision “underscores the complexity of balancing security and usability in software development” and “highlights the importance of careful consideration when implementing changes to critical systems like Ray and other open source components with network access.”

However, the disputed label makes these types of attacks difficult to detect; many scanners simply ignore them. So far, the researchers report, ShadowRay does not appear in several databases, including Google’s Open Source Vulnerability database (OSV). It is likewise invisible to static application security testing (SAST) and software composition analysis (SCA) tools.

“This creates a blind spot: security teams around the world are unaware that they may be at risk,” the researchers wrote. At the same time, “AI experts are not security experts” – making it possible that they are unaware of the very real risks that AI frameworks can pose.

From production workloads to OpenAI and Hugging Face tokens

Researchers reported that the compromised servers leaked large amounts of information, including:

  • AI production workloads, allowing attackers to compromise the integrity or accuracy of models, or to steal or poison models during the training phase.
  • Access to cloud environments (AWS, GCP, Azure, Lambda Labs) and sensitive cloud services. This could expose sensitive production data, including complete customer databases, code bases, artifacts and secrets.
  • Kubernetes API access, which could allow attackers to infect cloud workloads or steal Kubernetes secrets.
  • Passwords and credentials for OpenAI, Stripe and Slack.
  • Production database credentials, potentially allowing threat actors to silently download an entire database. In some cases, attackers could also modify a database or encrypt it with ransomware.
  • SSH private keys, which can be used to connect to more machines built from the same VM image template, giving attackers additional computing resources for cryptocurrency mining.
  • OpenAI tokens, which could be used to access accounts and deplete their credits.
  • Hugging Face tokens, which could provide access to private repositories and allow threat actors to add to or overwrite existing models – which could then be used in supply chain attacks.
  • Stripe tokens, which could be used to drain payment accounts by signing transactions on the live platform.
  • Slack tokens, which could be used to read messages or send arbitrary messages.
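Leaked tokens like these are often recognizable by well-known public prefixes, which is how secret scanners flag them. A minimal sketch of that kind of pattern matching – the prefix list is illustrative and not exhaustive, and the function name is ours, not from the research:

```python
import re

# Well-known public prefixes for a few token families (illustrative,
# not exhaustive): OpenAI "sk-", Hugging Face "hf_",
# Stripe live secret "sk_live_", Slack bot "xoxb-".
TOKEN_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9]{20,}"),
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{20,}"),
    "stripe_live": re.compile(r"\bsk_live_[A-Za-z0-9]{16,}"),
    "slack_bot": re.compile(r"\bxoxb-[A-Za-z0-9-]{10,}"),
}

def find_token_types(text: str) -> set[str]:
    """Return the set of token families whose pattern appears in text."""
    return {name for name, pat in TOKEN_PATTERNS.items() if pat.search(text)}
```

Running a check like this over environment variables and config files dumped from a compromised cluster is all it takes to turn one foothold into access to downstream services.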

The researchers further report that many of the compromised machines include GPUs that are “currently out of stock and difficult to obtain.” Oligo discovered “hundreds” of compromised clusters, each consisting of many nodes, most with GPUs that attackers were using for cryptocurrency mining.

“In other words, attackers choose to compromise these machines not only because they can gain access to valuable sensitive information, but also because GPUs are extremely expensive and difficult to obtain, especially now,” the researchers wrote. On AWS, the annual cost of a single such machine can reach $858,480.

The attackers had seven months to exploit the hardware, and researchers estimate the total value of the potentially compromised machines and computing power could approach $1 billion.

“Attackers are making the same calculations,” they warn.

Revealing shadow vulnerabilities

Oligo researchers admit that “shadow vulnerabilities will always exist,” and the signs of exploitation vary: data may be loaded from an untrusted source, firewall rules may be missing, or users may not account for a dependency’s behavior.

They recommend that organizations take a number of actions, including:

  • Always run Ray in a secure and trusted environment.
  • Add firewall rules or security groups to prevent unauthorized access.
  • Continuously monitor production environments and AI clusters for anomalies, even within Ray.
  • If the Ray dashboard really needs to be accessible, implement a proxy that adds an authorization layer.
  • Never trust defaults.
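The proxy recommendation above can be as simple as rejecting any dashboard request that lacks a valid bearer token before forwarding it upstream. A minimal sketch of that authorization check, assuming a shared secret – the token value and function name are illustrative, not part of Ray:

```python
import hmac

# Shared secret; illustrative only. In practice, load this from a
# secret store rather than hard-coding it.
EXPECTED_TOKEN = "replace-with-a-long-random-secret"

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token.

    A reverse proxy in front of the Ray dashboard would call this for
    every incoming request and return 401 before forwarding anything.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(presented, EXPECTED_TOKEN)
```

Gating the dashboard this way closes the gap Anyscale’s “secure environment” assumption leaves open when network isolation is misconfigured.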

Ultimately, they emphasize: “The technical burden of protecting open source is yours. Don’t rely on maintainers.”

