
Google Cloud and CSA: Cybersecurity will see significant adoption of generative AI in 2024, driven by C-suite leadership




The “department of no” stereotype in cybersecurity would have security teams and CISOs slamming the door on generative AI tools in their workflows.

Yes, there are dangers with this technology, but the reality is that many security practitioners have already experimented with AI, and most of them don’t think it will take their jobs. In fact, they recognize how useful the technology can be.

More than half of organizations plan to implement generative AI security tools by the end of the year, according to a new State of AI and Security Survey Report released by the Cloud Security Alliance (CSA) and Google Cloud.

“When we hear about artificial intelligence, the assumption is that everyone is scared of it,” said Caleb Sima, chair of the CSA AI Safety Initiative. “That every CISO is saying no to AI, that it’s a huge security risk, that it’s a huge problem.”


But in reality, as the report puts it, “artificial intelligence is transforming cybersecurity, providing both exciting opportunities and complex challenges.”

Rising implementation, and a disconnect

According to the report, two-thirds (67%) of security practitioners have tested AI specifically for security tasks. Additionally, 55% of organizations plan to adopt AI security tools this year – with the top use cases being rule creation, attack simulation, compliance violation detection, network detection, false positive reduction and anomaly classification. C-suite executives are largely supportive of this push: 82% of respondents attested to it.

Image credit: Google Cloud/CSA

Contrary to popular belief, only 12% of security professionals said they believe artificial intelligence will completely replace their role. Nearly a third (30%) said the technology will enhance their skill set, while others said it will support their role generally (28%) or replace large parts of their work (24%). A substantial majority (63%) said they see its potential to enhance security measures.

“For some jobs, there’s a lot of joy in having machines do them,” said Anton Chuvakin, security advisor in Google Cloud’s Office of the CISO.

Sima agreed, adding, “Most people are more inclined to think it will increase their job opportunities.”

But interestingly, executives claim to be far more familiar with AI technology than their staff – 52% versus 11%. Likewise, 51% of executives identified clear use cases for the technology, while only 14% of staff did.

“Frankly, most employees don’t have the time,” Sima said. Instead, they are dealing with day-to-day problems while their executives are inundated with AI news from other leaders, podcasts, news sites, newspapers and a host of other sources.

“The disconnect between C-suite executives and employees in understanding and implementing AI highlights the need for a strategic, unified approach to successfully integrate this technology,” he said.

Wide-ranging applications of AI in cybersecurity

The No. 1 use of AI in cybersecurity right now, Sima said, is reporting. Typically, members of the security team manually collect the output of various tools, spending “quite a lot of time” on it. But “AI can do it faster and better,” he said. AI can also take on rote tasks such as reviewing policies or automating playbooks.
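To make that reporting workflow concrete, here is a minimal sketch in Python. The `call_llm` helper and the example findings are hypothetical inventions, standing in for any chat-completion API and real tool output; this illustrates the aggregation pattern Sima describes, not any specific product.

```python
# Minimal sketch of AI-assisted security reporting (helper names are hypothetical).
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stub: swap in a real chat-completion client here.
    return "[model-generated summary would appear here]"

def build_weekly_report(tool_outputs: dict) -> str:
    """Aggregate raw findings from several security tools and ask a model to
    draft the summary a team member would otherwise assemble by hand."""
    findings = json.dumps(tool_outputs, indent=2)
    prompt = (
        "You are drafting a weekly security report. Summarize the findings "
        "below for an executive audience, grouped by severity:\n\n" + findings
    )
    return call_llm(prompt)

# Invented example input: per-tool findings an analyst would normally merge by hand.
report = build_weekly_report({
    "edr": [{"host": "web-01", "alert": "suspicious PowerShell", "severity": "high"}],
    "scanner": [{"host": "db-02", "cve": "CVE-2024-0001", "severity": "medium"}],
})
print(report)
```

The design point is simply that the model takes over the merging and summarizing step, leaving the analyst to review the draft rather than assemble it.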

But it can also be used more proactively, such as detecting threats, performing endpoint detection and response, finding and fixing vulnerabilities in code, and recommending remediation actions.

“A lot of the immediate action I see is ‘How do I classify these things?'” Sima said. “There’s a lot of information and alerts. In the security industry, we’re very good at finding bad things, but not very good at determining which of those bad things are the most important.”

He noted that it’s difficult to cut through the noise to determine “what’s real, what’s not, what’s the priority.”

But AI on its own can catch an email as it arrives and quickly determine whether it’s phishing. The model can ingest data on the sender, the email’s recipients and the reputation of any website links, all in an instant, while drawing inferences from the threat itself, the email chain and prior communication history. By comparison, Sima said, it would take a human analyst at least 5 to 10 minutes to verify the same email.

“They can now say with great confidence ‘This is phishing’ or ‘This is not phishing’,” he said. “It’s just amazing. It’s happening today; it works today.”
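As a rough illustration of that triage flow, here is a minimal sketch. The signal extraction mirrors the inputs described above (sender, recipients, link reputation), but the scoring rule and reputation lookup are invented stand-ins for the inference a real model would perform, and the domains are hypothetical.

```python
# Minimal sketch of AI-assisted phishing triage; all names and data are invented.
from dataclasses import dataclass
from email.message import EmailMessage
import re

@dataclass
class TriageSignals:
    sender: str
    recipients: list
    links: list

def extract_signals(msg: EmailMessage) -> TriageSignals:
    # Pull out the signals described above: sender, recipients and links.
    body = msg.get_content() if msg.get_content_type() == "text/plain" else ""
    return TriageSignals(
        sender=msg.get("From", ""),
        recipients=[r.strip() for r in msg.get("To", "").split(",") if r.strip()],
        links=re.findall(r"https?://\S+", body),
    )

def triage(signals: TriageSignals, link_reputation: dict) -> str:
    # Toy scorer standing in for the model's verdict: flag mail whose links
    # have poor reputation (0.0 = known bad, 1.0 = known good, 0.5 = unknown).
    worst = min((link_reputation.get(u, 0.5) for u in signals.links), default=1.0)
    return "phishing" if worst < 0.3 else "not phishing"

# Invented example: a lookalike domain with a known-bad reputation score.
msg = EmailMessage()
msg["From"] = "alerts@paypa1-security.example"
msg["To"] = "finance@corp.example"
msg.set_content("Please verify your account: https://paypa1-security.example/login")

signals = extract_signals(msg)
print(triage(signals, {"https://paypa1-security.example/login": 0.05}))  # phishing
```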

Executives push adoption – but there’s a trough ahead

Chuvakin noted that enthusiasm for AI in cybersecurity is spreading contagiously among leaders. They hope to use AI to fill skill and knowledge gaps, enable faster threat detection, increase productivity, reduce errors and misconfigurations, and provide faster incident response, among other things.

However, he noted, “We’re going to hit a trough of disillusionment on this.” He asserted that we’re “near the top of the hype cycle” because a great deal of time and money has been invested in AI and expectations are high, but the use cases are not yet clear or proven.

The focus now is on discovering and applying real-world use cases that will prove to be “magical” by the end of the year.

When real-world examples emerge, “security thinking around AI will change dramatically,” Chuvakin said.

Artificial Intelligence makes low-hanging fruit even lower

But enthusiasm remains intertwined with risk: 31% of respondents to the Google Cloud-CSA survey believe AI will benefit defenders and attackers alike. Additionally, 25% said AI could prove more useful to malicious actors.

“Attackers are always at an advantage because they can exploit technology faster,” Sima said.

As many have before, he compared AI to the earlier evolution of the cloud: “What does the cloud do? The cloud allows attackers to do things at scale.”

Threat actors can now target everyone, rather than a single, deliberately chosen target. AI will further support their efforts by allowing them to become more sophisticated and more focused.

Sima noted that, for example, a model could scrape someone’s LinkedIn profile to gather valuable information for crafting a completely believable phishing email.

“It allows me to personalize at scale,” he said. “It makes those low-hanging fruit even lower.”
