
IT and security pros are ‘cautiously optimistic’ about artificial intelligence

According to a Cloud Security Alliance report commissioned by Google Cloud, senior executives are more familiar with artificial intelligence technology than the IT and security personnel who work for them. The report, released on April 3, examines whether IT and security professionals are worried about AI replacing their jobs, the benefits and challenges of the rise of generative AI, and more.

63% of IT and security professionals surveyed believe AI will improve security within their organizations. Another 24% are neutral on the impact of AI on security measures, while 12% do not believe AI will improve security within their organization. Only a tiny minority (12%) of those surveyed predict that artificial intelligence will replace their jobs.

The survey used to create the report was conducted globally in November 2023, with responses from 2,486 IT and security professionals and C-suite leaders at organizations across the Americas; Asia-Pacific; and Europe, the Middle East and Africa.

Non-leadership cybersecurity professionals are less clear than C-suite executives about possible use cases for AI in cybersecurity, with only 14% of staff (compared to 51% at the executive level) saying they are “very clear” on those use cases.

Caleb Sima, chair of the Cloud Security Alliance’s AI Security Initiative, said in a press release: “The disconnect between C-suite executives and employees in understanding and implementing AI highlights the need for a strategic, unified approach to successfully integrate this technology.”

Some questions in the report explicitly ask about generative AI, while other questions use the term “artificial intelligence” broadly.

AI knowledge gap in security

C-level professionals face top-down pressure, which may result in them understanding AI use cases better than security professionals.

Many (82%) executive professionals say their executive leadership and boards are driving AI adoption. However, the report noted that this approach could lead to implementation issues.

“This may highlight a lack of awareness of the difficulty and knowledge required to adopt and implement a uniquely disruptive technology like prompt engineering,” wrote Hillary Baron, senior technical director of research and analysis at the Cloud Security Alliance, and a team of contributors.

This knowledge gap exists for several reasons:

  • Cybersecurity professionals may not quite understand how AI impacts overall strategy.
  • Leaders may underestimate the difficulty of implementing an AI strategy within existing cybersecurity practices.

The authors of the report point out that some data (Figure A) shows that respondents are as familiar with generative AI and large language models as they are with older terms like natural language processing and deep learning.

Figure A

Responses to the instruction “Evaluate your familiarity with the following artificial intelligence technologies or systems.” Image: Cloud Security Alliance

The report’s authors note that familiarity with older terms like natural language processing and deep learning dominates, which may indicate confusion between generative AI and popular tools like ChatGPT.

“It’s the difference between being familiar with consumer-grade GenAI tools versus being familiar with professional/enterprise-grade tools, which is more important in terms of adoption and implementation,” Baron said in an email to TechRepublic. “This is something we see commonly among security professionals at all levels.”

Will artificial intelligence replace cybersecurity jobs?

A small group (12%) of security professionals believe artificial intelligence will completely replace their jobs within the next five years. Other respondents expect less drastic outcomes:

  • 30% believe AI will help enhance some of their skills.
  • 28% predict AI will fully support their current role.
  • 24% believe artificial intelligence will replace most of their roles.
  • 5% don’t expect AI to impact their role at all.

Organizational AI goals reflect this, with 36% seeking outcomes where AI enhances the skills and knowledge of their security teams.

The report points out an interesting disconnect: while improving skills and knowledge is a highly desired outcome, a talent shortage ranks at the bottom of the list of challenges. This could mean that immediate tasks, such as identifying threats, take priority in day-to-day operations, while talent is a longer-term concern.

The benefits and challenges of artificial intelligence in cybersecurity

The group was divided on whether AI would be more beneficial to defenders or attackers:

  • 34% believe AI will be more beneficial for security teams.
  • 31% believe this benefits defenders and attackers alike.
  • 25% believe this is more beneficial to attackers.

Professionals concerned about the use of AI in security cite the following reasons:

  • Poor data quality leads to unexpected biases and other problems (38%).
  • Lack of transparency (36%).
  • There is a skills/expertise gap in managing complex AI systems (33%).
  • Data poisoning (28%).

Hallucinations, privacy, data breach or loss, accuracy and misuse were other answer options people could express concern about; each of these received less than 25% of the vote in the survey, which asked respondents to select the three issues they were most concerned about.

See: UK’s National Cyber Security Centre finds generative AI could bolster attackers’ arsenals. (TechRepublic)

When asked whether they are concerned about the potential risks of over-reliance on AI in cybersecurity, more than half (51%) of respondents said “yes”; another 28% were neutral.

Planned uses of generative AI in cybersecurity

Among organizations planning to use generative AI for cybersecurity, the intended uses are very broad (Figure B). Common uses include:

  • Rule creation.
  • Attack simulation.
  • Compliance violation monitoring.
  • Network detection.
  • Reduce false positives.

Figure B

Responses to the question “How does your organization plan to use generative AI for cybersecurity? (Select the top 3 use cases).” Image: Cloud Security Alliance

How organizations build teams in the era of artificial intelligence

Of those surveyed, 74% said their organizations planned to form new teams within the next five years to oversee the safe use of AI. The structure of these teams may vary.

Today, some organizations working on AI deployment leave it to security teams (24%). Other organizations give primary responsibility for AI deployment to the IT department (21%), data science/analytics teams (16%), dedicated AI/ML teams (13%), or senior management/leadership (9%). In rarer cases, responsibility falls to DevOps (8%), cross-functional teams (6%), or teams that fit no category (listed as “other” at 1%).

See: Hiring Kit: Prompt Engineer (TechRepublic Premium)

“It is clear that artificial intelligence in cybersecurity is not only transforming existing roles but also paving the way for new professional positions,” write lead author Hillary Baron and a team of contributors.

What kind of position? Baron told TechRepublic that generative AI governance is a growing subfield, as is AI-focused training and upskilling.

“In general, we’re also starting to see job postings that include more AI-specific roles, such as prompt engineers, AI security architects and security engineers,” Baron said.

