
Artificial intelligence deepfakes increasingly a risk for organizations in Asia Pacific

Not long ago, AI deepfakes were not on organizations’ risk radars; by 2024, they were moving up the rankings. AI deepfakes are likely to remain a risk for some time because of their potential to cause everything from plummeting stock prices to loss of brand trust through misinformation.

Robert Huber, chief security officer and director of research at cybersecurity company Tenable, told TechRepublic that AI deepfakes could be used by a range of malicious actors. While detection tools are still maturing, businesses in Asia Pacific can prepare and better protect their content by adding deepfakes to their risk assessment.

Ultimately, when international norms converge around AI, organizations may be more protected. Huber called on big tech platform players to develop stronger and clearer identification of AI-generated content, rather than leaving it to non-expert individual users.

Artificial intelligence deepfakes pose rising risks to society and businesses

The risk of misinformation and disinformation generated by artificial intelligence is becoming a global concern. Following the launch of a wave of generative AI tools in 2023, the risk category as a whole ranked as the second most severe risk in the World Economic Forum’s 2024 Global Risks Report (Figure A).

Figure A

AI misinformation has the potential to become a “material crisis on a global scale” in 2024, according to the 2024 Global Risks Report. Image: World Economic Forum

More than half (53%) of respondents from business, academia, government and civil society see AI-generated misinformation and disinformation, including deepfakes, as a risk. Misinformation was also rated as the biggest risk factor over the next two years (Figure B).

Figure B

The risk of misinformation and disinformation is expected to rank highest in the short term and remain in the top five over 10 years. Image: World Economic Forum

Businesses haven’t been quick to consider the risks of AI deepfakes. Aon’s Global Risk Management Survey, for example, did not mention them, despite organizations’ concerns that AI could cause business disruption or damage to their brand and reputation.

Huber said the risk of AI deepfakes is still emerging and shifting as AI itself changes rapidly. However, he said Asia-Pacific organizations should consider it now. “This is not necessarily a cyber risk. This is an enterprise risk,” he said.

AI deepfakes give nearly every threat actor a new tool

AI deepfakes are expected to become another option used by any adversary or threat actor to achieve their goals. Huber said this could include nation-states with geopolitical goals and activist groups with idealistic agendas, as well as actors motivated by financial gain and influence.

“You’re going to have everything here, from nation-state groups to environmentally conscious groups to hackers who just want to make money. I think this is another tool in any malicious actor’s toolbox,” Huber explained.

SEE: Generative AI could increase the global threat of ransomware

The low cost of deepfakes means there is a low barrier to entry for malicious actors

The ease of use of AI tools and the low cost of producing AI material mean there are few obstacles for malicious actors looking to take advantage of them. One difference from the past, Huber said, is the level of quality now at threat actors’ fingertips.

“Years ago, the [cost] barrier to entry was low, but so was the quality,” Huber said. “The barrier to entry is still low, but [with generative AI] the quality is vastly improved. So it is increasingly difficult for most people to identify deepfakes on their own without additional cues.”

What risks do AI deepfakes pose to organizations?

Huber said the risks of AI deepfakes are still so new that they are not yet on the risk assessment agenda for organizations in Asia Pacific. However, referring to the recent state-sponsored cyberattacks against Microsoft, reported by Microsoft itself, he invited people to ask: what if this had been a deepfake?

“Whether it’s misinformation or influence, Microsoft is bidding for large contracts with different governments around the world. That would speak to the credibility of an enterprise like Microsoft, or for that matter any large tech organization.”

Enterprise contract losses

Any type of for-profit business can be affected by AI deepfake material. For example, fabricated misinformation could cost an organization contracts around the world, or provoke social concern or a backlash against it that harms its prospects.

Physical security risks

AI deepfakes could add a new dimension to the critical risk of business disruption. For example, AI-sourced misinformation could incite a disturbance, or even the perception of one, endangering people or operations.

Brand and reputation impact

Forrester has released a list of potential deepfake scams. These include risks to organizational reputation and brand, as well as to employee experience and human resources. One risk is amplification, where AI deepfakes are used to spread other AI deepfakes to reach a wider audience.

Financial impact

Financial risks include the use of AI deepfakes to manipulate stock prices and the risk of financial fraud. Recently, a finance employee at a multinational in Hong Kong was deceived into paying US$25 million (AU$40 million) to criminals after they used a sophisticated AI deepfake to impersonate the company’s chief financial officer on a video conference call.

Personal judgment is not the solution to deepfakes for organizations

A big problem facing organizations in Asia Pacific is that AI deepfake detection is difficult for everyone. While regulators and technology platforms are adapting to advances in artificial intelligence, much of the responsibility for identifying deepfakes falls on individual users, not intermediaries.

This means the beliefs of individuals and groups can affect organizations. In an environment that may contain misinformation in media and on social media, individuals are being asked to judge in real time whether a damaging story about a brand or an employee is true or a deepfake.

Individual users lack the ability to distinguish fact from fiction

Huber said it’s “problematic” to expect people to be able to tell what is an AI-generated deepfake and what is not. Currently, he believes, AI deepfakes are difficult to identify even for technical professionals, and individuals without experience identifying them will have an even harder time.

“It’s like saying, ‘We’re going to train everyone about cyber security.’ Now, the ACSC (Australian Cyber Security Centre) has a lot of good guidance on cyber security, but apart from people actually working in cyber security, who really understands that guidance?” he asked.

Bias is also a factor. “If you’re looking at material that’s important to you, you bring in a bias; you’re less likely to pay attention to the nuances of an action or gesture, or whether an image is 3D. If you’re invested in the content, you’re not using those spidey senses to look for anomalies.”

Tools for detecting AI deepfakes are catching up

Tech companies are working on tools to combat the growth of AI deepfakes. For example, Intel’s real-time FakeCatcher tool is designed to identify deepfakes by using video pixels to assess the blood flow of the humans in a video, identifying fakes through “what makes us human.”
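
For intuition, below is a toy sketch of the remote-photoplethysmography (rPPG) idea that blood-flow approaches of this kind build on: skin pixels in genuine footage carry a faint periodic heart-rate signal that many synthetic faces lack. This is an illustrative approximation, not Intel’s algorithm; it assumes the opencv-python and numpy packages are installed, and the video file name is hypothetical.

```python
# Toy rPPG-style check: measure how much of the face's green-channel
# signal falls in the human heart-rate band (0.7-4 Hz). Illustrative only.
import cv2
import numpy as np

def pulse_band_ratio(video_path: str, max_frames: int = 300) -> float:
    """Share of spectral power in the heart-rate band of the mean
    green-channel intensity over the first detected face."""
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if unknown
    samples = []
    while len(samples) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        faces = face_detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Mean green intensity over the face region loosely tracks blood flow.
        samples.append(frame[y:y + h, x:x + w, 1].mean())
    capture.release()
    if len(samples) < 64:
        return 0.0  # not enough signal to judge
    signal = np.asarray(samples) - np.mean(samples)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    total = spectrum.sum()
    return float(spectrum[band].sum() / total) if total else 0.0

# print(pulse_band_ratio("meeting_clip.mp4"))  # hypothetical video file
```

A weak pulse-band ratio is, at best, one noisy hint that footage may be synthetic; production detectors combine many physiological and artifact-based cues.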

Huber said the ability to detect and identify AI-powered deepfakes is still emerging. After looking at some of the tools available on the market, he said he has nothing in particular to recommend at the moment because “the field is moving so fast.”

What will help organizations address AI deepfake risks?

Huber said the rise of AI deepfakes could lead to a “cat and mouse” game between malicious actors who generate deepfakes and those who try to detect and block them. As a result, the tools and capabilities that help detect AI deepfakes are likely to change rapidly as this “arms race” plays out.

There are some defenses available to organizations.

The formation of international artificial intelligence regulatory norms

Australia is one of the jurisdictions seeking to regulate AI content through measures such as watermarking. As other jurisdictions around the world reach consensus on AI governance, best practice approaches to support better identification of AI content are likely to converge.
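
As a simplified illustration of what watermarking means in practice, the sketch below hides a short provenance tag in the least-significant bits of an image’s pixels. Real-world schemes (such as C2PA provenance metadata or model-level watermarks) are far more robust; this toy version survives no compression or editing, and the payload and function names are illustrative assumptions.

```python
# Toy least-significant-bit (LSB) watermark: embed and recover a short
# provenance tag in an image's pixel data. Illustrative only; trivially
# destroyed by compression, resizing or editing.
import numpy as np

def embed_watermark(image: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the LSBs of the first len(payload)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bytes: int) -> bytes:
    """Read the payload back out of the LSBs."""
    bits = image.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Round-trip demo on a random image; the tag itself is a made-up example.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
tag = b"ai-generated"
marked = embed_watermark(img, tag)
assert extract_watermark(marked, len(tag)) == tag
```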

Huber said that while this was very important, some actors would not abide by international norms. “There has to be an implicit understanding that no matter what regulations we put in place or how we try to minimize this, there are still people who are going to do it.”

SEE: A summary of the EU’s new artificial intelligence regulations

Big tech platforms identifying artificial intelligence deepfakes

A critical step for large social media and technology platforms like Meta and Google is to better combat AI deepfake content and label it more clearly for users on their platforms. If platforms take on more of this responsibility, non-expert end users such as organizations, employees and the public will have less work to do in trying to identify whether something is a deepfake.

Huber said this would also help IT teams. Having large technology platforms identify AI deepfakes and give users more information or tools would shift the task away from organizations, reducing the IT investment required to purchase and manage deepfake detection tools and the security resources allocated to managing them.

Adding AI deepfakes to risk assessments

Asia-Pacific organizations may soon need to make the risks associated with AI deepfakes part of their regular risk assessment procedures. For example, Huber said organizations may need to be more proactive in controlling and protecting the content they generate, both internally and externally, and in documenting those measures for third parties.

“Most established security companies conduct third-party risk assessments of their vendors. I’ve never seen any questions related to how they protect digital content,” he said. Huber predicts that third-party risk assessments may soon need to include questions about minimizing the risks of deepfakes.
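
What “controlling and protecting” generated content might look like in practice is still taking shape, but one minimal sketch is to publish a detached signature alongside each asset so third parties can verify it has not been altered or fabricated. The example below uses Ed25519 via the Python cryptography package; the key handling and asset bytes are illustrative assumptions, not a measure Huber or Tenable prescribes.

```python
# Minimal sketch: sign published content so third parties can verify its
# provenance. Assumes the 'cryptography' package; key storage and the
# asset bytes below are illustrative placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, held in a KMS/HSM
public_key = private_key.public_key()       # distributed to verifiers

asset = b"2024 annual report, final PDF bytes..."  # stand-in for a real file
signature = private_key.sign(asset)          # detached signature to publish

# Anyone with the public key can check the asset; verify() raises
# cryptography.exceptions.InvalidSignature if the content was altered.
public_key.verify(signature, asset)
print("asset verified: matches what the organization signed")
```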

