
Why adversarial AI is a cyberthreat no one saw coming




According to a recent report, security leaders’ intentions are not aligned with their actions when it comes to protecting AI and MLOps.

An overwhelming 97% of IT leaders say securing AI and safeguarding systems is critical, yet only 61% are confident they will get the funding they need. And while most of the IT leaders interviewed (77%) said they had experienced some form of AI-related breach (not specifically to models), only 30% have deployed a manual defense against adversarial attacks in their existing AI development, including MLOps pipelines.

Just 14% say they are planning and testing for such attacks. Amazon Web Services defines MLOps as “a set of practices that automate and simplify machine learning (ML) workflows and deployments.”

IT leaders are increasingly relying on AI models, making them an attractive attack surface for a variety of adversarial AI attacks.


IT leaders’ companies have an average of 1,689 models in production, and 98% of IT leaders consider some of their AI models crucial to their success. Eighty-three percent report that AI is in use across every team in their organization. “The industry is pushing hard to accelerate AI adoption without taking appropriate security measures,” the report’s analysts wrote.

HiddenLayer’s AI Threat Landscape Report provides a critical analysis of the risks faced by AI-based systems and the progress made in securing AI and MLOps pipelines.

Defining adversarial artificial intelligence

The goal of adversarial AI is to deliberately mislead AI and machine learning (ML) systems, rendering them worthless for the use cases they were designed for. Adversarial AI refers to “the use of artificial intelligence techniques to manipulate or deceive AI systems. It’s like a cunning chess player who exploits the vulnerabilities of its opponent. These intelligent adversaries can bypass traditional cyber defense systems, using sophisticated algorithms and techniques to evade detection and launch targeted attacks.”
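To make the definition concrete, below is a minimal sketch of one of the best-known adversarial ML techniques, the fast gradient sign method (FGSM), which perturbs an input just enough to push a classifier toward a wrong prediction. This is an illustrative example assuming a PyTorch image classifier with inputs scaled to [0, 1]; it is not drawn from the HiddenLayer report.

```python
# Minimal FGSM sketch: nudge an input in the direction that most increases
# the model's loss, so an input it classified correctly may be misclassified.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x (one signed-gradient step)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # The perturbation is small enough to look unchanged to a human,
    # yet often enough to flip the model's prediction.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

A small epsilon keeps the change imperceptible; attackers tune it to stay under detection thresholds while still degrading the model.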

HiddenLayer’s report identifies three broad categories of adversarial AI, defined as follows:

Adversarial machine learning attacks. These attacks aim to exploit vulnerabilities in algorithms, with goals ranging from modifying the behavior of a broader AI application or system, to evading AI-based detection and response systems, to stealing the underlying technology. Nation-states practice espionage for financial and political gain, looking to reverse-engineer models to obtain model data and to weaponize models for their own use.

Generative AI system attacks. These attacks often target the filters, guardrails, and restrictions designed to protect generative AI models, including every data source and large language model (LLM) they rely on. Nation-state attackers continue to weaponize LLMs, VentureBeat has learned.

Attackers bet that bypassing content restrictions gives them the freedom to create prohibited content the model would otherwise block, including deepfakes, misinformation, and other types of harmful digital media. Gen AI system attacks are a favorite of nation-states seeking to influence elections in the United States and other democracies around the world. The 2024 Annual Threat Assessment of the U.S. Intelligence Community found that “China is demonstrating a higher degree of sophistication in its influence activity, including experimenting with generative AI,” and that the People’s Republic of China (PRC) “may attempt to influence the U.S. elections in 2024 at some level because of its desire to sideline critics of China and magnify U.S. societal divisions.”

MLOps and software supply chain attacks. These are often operations by nation-states and large e-crime syndicates aimed at bringing down the frameworks, networks, and platforms on which AI systems are built and deployed. Attack strategies include targeting the components used in an MLOps pipeline to introduce malicious code into the AI system; poisoned datasets are delivered through software packages, arbitrary code execution, and malware delivery techniques. A minimal mitigation sketch follows below.
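One common defense against the poisoned-artifact delivery described above is to pin and verify the hash of every dataset or model file before it enters the pipeline. The sketch below is a minimal, hypothetical example of such a check in Python; the file name and digest are placeholders, not anything taken from the report.

```python
# Minimal supply-chain integrity check: refuse to load any artifact whose
# SHA-256 digest does not match a digest pinned at review time.
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> None:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"{path}: digest {digest} does not match pinned value")

# Hypothetical usage in a pipeline step, before any training code touches the data:
# verify_artifact("train_split.parquet", "9f2c8d...pinned-at-review-time...")
```

Pinning digests in version control means a swapped or poisoned dataset fails loudly at pipeline time instead of silently degrading the model.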

Four ways to defend against adversarial AI attacks

The wider the gaps across DevOps and CI/CD pipelines, the more vulnerable AI and ML model development becomes. Protecting models remains an elusive, ever-evolving goal, made even more challenging by the weaponization of gen AI.

The following are just a few of the many steps organizations can take to defend against adversarial AI attacks:

Make red teaming and risk assessment part of the organization’s muscle memory or DNA. Don’t settle for red teaming on a sporadic schedule, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps practice supporting MLOps from now on. The goal is to preemptively identify system and pipeline weaknesses, and to prioritize and harden any attack vectors that surface as part of MLOps’ System Development Life Cycle (SDLC) workflows (see the sketch after this list for one way to automate this).

Stay up to date and adopt the AI defense framework that works best for your organization. Keep members of your DevSecOps team current on the many defense frameworks available today. Knowing which one best fits your organization’s goals can help secure MLOps, save time, and secure the broader SDLC and CI/CD pipeline in the process. Examples include the NIST AI Risk Management Framework and the OWASP AI Security and Privacy Guide.

Reduce the threat of synthetic data-based attacks by integrating biometric modalities and passwordless authentication techniques into every identity access management system. VentureBeat has learned that synthetic data is increasingly being used to impersonate identities and gain access to source code and model repositories. Consider using a combination of biometric modalities, including facial recognition, fingerprint scanning, and voice recognition, combined with passwordless access technologies to secure the systems used across MLOps. Gen AI has proven capable of helping create synthetic data, and MLOps teams will increasingly battle deepfake threats, so a layered approach to securing access is quickly becoming a must.

Audit verification systems randomly and often to keep access privileges current. With synthetic identity attacks emerging as one of the most challenging threats to contain, keeping verification systems current with patches and auditing them is critical. VentureBeat believes the next generation of identity attacks will be based largely on synthetic data aggregated to appear legitimate.
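As a concrete illustration of the red-teaming step above, the pytest-style test below fails a CI run whenever model accuracy under the FGSM perturbation from the earlier sketch drops below a floor. This is a hypothetical sketch: the helper modules, loaders, and the 70% threshold are placeholders, not anything prescribed by the report or the frameworks named above.

```python
# Hypothetical CI gate: block deployment if adversarial accuracy regresses.
import torch

from adversarial_utils import fgsm_perturb           # FGSM helper from the earlier sketch (hypothetical module)
from mypipeline.models import load_candidate_model   # hypothetical loader
from mypipeline.data import load_eval_batch          # hypothetical loader

def test_model_survives_fgsm():
    model = load_candidate_model()
    x, y = load_eval_batch()                          # small labeled evaluation batch
    x_adv = fgsm_perturb(model, x, y, epsilon=0.03)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    acc = (preds == y).float().mean().item()
    # Placeholder floor: tune per model and threat model.
    assert acc >= 0.70, f"adversarial accuracy {acc:.2%} fell below the 70% floor"
```

Wiring a test like this into the MLOps pipeline turns red teaming from an occasional exercise into an automatic gate on every build.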
