
Zscaler finds enterprise AI adoption jumps 600% in less than a year, putting data at risk

Join us in Atlanta on April 10 to explore the future of a secure workforce. We’ll cover the vision, benefits, and use cases of artificial intelligence for security teams. Request an invitation here.


The volume of blocked AI/ML transactions increased by 577% in just nine months.

CISOs and the businesses they protect have good reason to be cautious and to block a record number of AI/ML transactions. Attackers have refined their espionage techniques and are now weaponizing LLMs to attack organizations without their knowledge. Adversarial AI is also a growing threat, in large part because it is a class of attack that few organizations anticipated.

The ThreatLabz 2024 AI Security Report released today by Zscaler quantifies why enterprises need scalable cybersecurity strategies to protect the many AI/ML tools they are using. Data protection, AI data quality management and privacy concerns dominated the findings. ThreatLabz analyzed how enterprises use AI and ML tools today, based on more than 18 billion transactions on the Zscaler Zero Trust Exchange from April 2023 to January 2024.

The adoption of AI/ML tools across healthcare, finance and insurance, services, technology, and manufacturing, combined with the growing risk of cyberattacks, is a sobering reminder that these industries are unprepared for AI-based attacks. Manufacturing generates the most AI traffic, accounting for 20.9% of all AI/ML transactions, followed by finance and insurance (19.9%) and services (16.8%).


Blocking transactions is a quick, temporary win

CISOs and their security teams are choosing to block a record number of AI/ML tool transactions to prevent potential cyberattacks. It is a decisive move to protect the most vulnerable industries from the brunt of cyberattacks.

ChatGPT is both the most-used and the most-blocked AI tool today, followed by OpenAI, Fraud.net, Forethought and Hugging Face. The most-blocked domains are Bing.com, Divo.ai, Drift.com and Quillbot.com.

Image: Enterprises blocked more than 2.6 billion AI/ML transactions between April 2023 and January 2024.

The manufacturing industry blocked only 15.65% of AI transactions, a low figure given the industry’s exposure to cyberattacks, especially ransomware. Finance and insurance blocked the largest share of AI transactions, at 37.16%, reflecting rising concerns about data security and privacy risks. Worryingly, even though the healthcare industry handles sensitive health data and personally identifiable information (PII), it blocks a below-average 17.23% of AI transactions, suggesting its efforts to protect the data flowing into AI tools may be lagging.

Disruptions to time- and life-sensitive businesses like healthcare and manufacturing can result in ransomware payouts far higher than in other industries. The recent UnitedHealthcare ransomware attack is an example of how a well-planned attack can take down an entire supply chain.

Blocking is a short-term solution to a larger problem

Making better use of all available telemetry and deciphering the vast amounts of data that cybersecurity platforms capture is the first step toward moving beyond blocking. CrowdStrike, Palo Alto Networks and Zscaler are all improving their ability to derive new insights from telemetry.

“One of the areas we’re really pioneering is that we can take weak signals from different endpoints,” CrowdStrike co-founder and CEO George Kurtz told a keynote audience at the company’s annual Fal.Con event last year. “We can correlate those to discover novel detections. We are now extending this to our third-party partners so that we can look at other weak signals, not only across endpoints but across domains, and come up with novel detections.”
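The weak-signal correlation Kurtz describes can be sketched in a few lines. Everything below — the signal names, weights, and alert threshold — is an illustrative assumption, not CrowdStrike's actual logic:

```python
from collections import defaultdict

# Hypothetical weak signals: each alone is too noisy to alert on,
# but several from the same host together are suspicious.
SIGNAL_WEIGHTS = {
    "unusual_parent_process": 0.3,
    "rare_outbound_domain": 0.25,
    "new_scheduled_task": 0.2,
    "credential_file_read": 0.4,
}
ALERT_THRESHOLD = 0.7  # illustrative cut-off

def correlate(events):
    """Aggregate per-host weak-signal scores; return hosts worth alerting on."""
    scores = defaultdict(float)
    for host, signal in events:
        scores[host] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return {host: score for host, score in scores.items() if score >= ALERT_THRESHOLD}

events = [
    ("host-a", "unusual_parent_process"),
    ("host-a", "credential_file_read"),
    ("host-a", "rare_outbound_domain"),
    ("host-b", "new_scheduled_task"),  # a single weak signal: no alert
]
print(correlate(events))  # only host-a crosses the threshold
```

Real platforms correlate far richer signals across endpoints and domains, but the principle is the same: individually ignorable events become a detection only in aggregate.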

Leading cybersecurity vendors have deep expertise in artificial intelligence, many with decades of experience in machine learning, including BlackBerry Persona, Broadcom, Cisco Security, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Sophos and VMware Carbon Black. Expect these vendors to train their LLMs on AI-driven attack data in an effort to keep pace with attackers’ accelerating use of adversarial AI.

A new, deadlier artificial intelligence threat has emerged

“For enterprises, AI-driven risks and threats fall into two broad categories: data protection and security risks associated with enabling enterprise AI tools, and risks from the new cyber threat landscape driven by generative AI tools and automation,” Zscaler reports.

Chief information security officers (CISOs) and their teams face the daunting challenge of protecting their organizations from the AI attack techniques outlined in the report. Preventing employee negligence when using ChatGPT and ensuring confidential data is never accidentally shared should be boardroom topics. They should make risk management the core of their cybersecurity strategy.

Protecting intellectual property from leaking out of the organization through ChatGPT, containing shadow AI, and getting data privacy and security right are at the core of an effective AI/ML tooling strategy.
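One common guardrail against accidental IP leakage is to screen prompts before they leave the organization. Here is a minimal sketch, assuming a simple regex-based redaction step; the patterns are illustrative, and real DLP products are far more sophisticated:

```python
import re

# Hypothetical patterns for data that should never reach an external LLM.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders; report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

clean, found = redact(
    "Ask support@acme.example to rotate key sk-abcdef1234567890XYZ"
)
print(found)  # ['EMAIL', 'API_KEY']
print(clean)
```

Depending on policy, a hit could trigger redaction as above, block the request outright, or log it for review — the same choice enterprises face when deciding which AI transactions to block.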

Last year, VentureBeat spoke with Alex Philips, CIO of National Oilwell Varco (NOV), about the company’s approach to generative AI. Philips told VentureBeat that he was tasked with educating the board on the overall benefits and risks of ChatGPT and generative AI. He provides regular updates to the board on the current state of GenAI technology, an ongoing education process that helps set expectations for the technology and shows how NOV puts guardrails in place to ensure a leak like Samsung’s never happens. He noted how powerful ChatGPT is as a productivity tool and how important it is to maintain security while reining in shadow AI.

Balancing productivity and security is critical to meeting the challenges of a new, unknown AI threat environment. Zscaler’s own CEO was targeted in a phishing and SMS scam in which threat actors impersonated Zscaler CEO Jay Chaudhry’s voice in WhatsApp messages in an attempt to trick an employee into buying gift cards and divulging more information. Zscaler was able to stop the attack using its own systems. VentureBeat has learned that this is a common attack pattern aimed at leading CEOs and technology leaders in the cybersecurity industry.

Attackers are relying on artificial intelligence to launch ransomware attacks at greater scale and speed than ever before. Zscaler notes that AI-driven ransomware attacks are now part of nation-state attackers’ arsenals and are being used with increasing frequency. Attackers use generative AI prompts to build tables of known vulnerabilities for every firewall and VPN in a target organization. Next, they use LLMs to generate or optimize exploit code for those vulnerabilities and tailor the payload to the target environment.

Generative AI can also be used to identify weaknesses in an enterprise’s supply chain partners while highlighting the best routes to connect to the core enterprise network, Zscaler noted. Even if organizations maintain a strong security posture, downstream vulnerabilities often pose the greatest risk. Attackers continue to experiment with generative artificial intelligence, creating feedback loops to improve the results of more complex and targeted attacks that are harder to detect.

Attackers aim to leverage generative AI across the entire ransomware attack chain—from automated reconnaissance and code exploitation for specific vulnerabilities to the generation of polymorphic malware and ransomware. By automating key parts of the attack chain, threat actors can produce faster, more sophisticated and more targeted attacks against enterprises.

Image: Attackers are using artificial intelligence to streamline their attack strategies and reap greater rewards by causing more disruption to target organizations and their supply chains.

