
Privacy experts warn that Google’s call-scanning AI could increase censorship by default

Google demoed a feature at its I/O conference yesterday that uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, and the demo sent a collective shiver down the spines of privacy and security experts. They warn the feature represents the thin end of the wedge: once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.

The call scam-detection feature Google demoed will, the tech giant said, be built into a future version of its Android operating system, which is expected to run on around three-quarters of the world’s smartphones. It is powered by Gemini Nano, the smallest of Google’s current generation of AI models, which is meant to run entirely on-device.

This is, essentially, client-side scanning: a nascent technology that has generated huge controversy in recent years over efforts to detect child sexual abuse material (CSAM), and even grooming activity, on messaging platforms.
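For readers unfamiliar with the term, here is a minimal, purely illustrative sketch of the client-side scanning pattern: content is classified locally, on the device, before anything is transmitted. This is a toy assumption about how such a pipeline could be shaped, not Google’s implementation; the keyword heuristic merely stands in for an on-device model such as Gemini Nano, and every function name and threshold here is hypothetical.

```python
# Toy sketch of the client-side scanning pattern. All names, patterns and
# thresholds are hypothetical; a real system would run an on-device ML
# model rather than a keyword heuristic.
from dataclasses import dataclass

# Stand-in for whatever patterns an on-device model has learned to flag.
SCAM_PATTERNS = ("wire the money", "buy gift cards", "account is locked")

@dataclass
class ScanResult:
    flagged: bool
    score: float
    matches: list[str]

def scan_on_device(transcript: str, threshold: float = 0.5) -> ScanResult:
    """Score a call transcript locally; nothing leaves the device."""
    text = transcript.lower()
    matches = [p for p in SCAM_PATTERNS if p in text]
    score = min(1.0, len(matches) / 2)  # crude toy scoring
    return ScanResult(score >= threshold, score, matches)

if __name__ == "__main__":
    result = scan_on_device("You must buy gift cards now, your account is locked.")
    if result.flagged:
        print(f"On-device warning (score={result.score:.2f}): {result.matches}")
```

The critics quoted below are making an architectural point: once a scanning hook like this ships in the operating system, retargeting it at other kinds of content is largely a matter of changing what the model is trained, or instructed, to flag.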

Apple abandoned a plan to deploy client-side scanning for CSAM in 2021 after a huge privacy backlash. However, policymakers have continued to pile pressure on the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all sorts of content scanning by default, whether government-led or driven by a particular commercial agenda.

Responding to Google’s call-scanning demo in a post on X, Meredith Whittaker, president of the US-based encrypted messaging app Signal, warned: “This is extremely dangerous. It lays the foundation for centralized, device-level client-side scanning.

“It’s a short step from detecting ‘scams’ to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

Matthew Green, a cryptography expert and professor at Johns Hopkins University, also took to X to raise the alarm. “In the future, AI models will run inference on your texts and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning was conducted. This would block open clients.”

Green suggested this dystopian future of censorship by default is, technically speaking, only a few years away. “We’re still some way from an implementation of this technology that’s efficient enough, but it’s only a few years off. A decade at most,” he said.

European privacy and security experts were also quick to express their opposition.

Reacting to Google’s demo in a post on X, Lukasz Olejnik, a Poland-based independent researcher and consultant on privacy and security issues, welcomed the company’s anti-scam feature but warned the infrastructure could be repurposed for social surveillance. “[T]his also means that technical capabilities have already been, or are being, developed to monitor calls, creation, written texts or documents, for example in search of content that is illegal, harmful, hateful, or otherwise undesirable or iniquitous, with respect to someone’s standards,” he wrote.

“Going a step further, such a model could, for example, display a warning. Or block the ability to continue,” Olejnik continued. “Or report it somewhere. Technological modulation of social behaviour, or the like. This is a major threat to privacy, but also to a range of basic values and freedoms. The capabilities are already there.”

Fleshing out his concerns, Olejnik told TechCrunch: “I haven’t seen the technical details, but Google assures that detection would be done on-device. This is great for user privacy. However, there’s much more at stake than privacy. This highlights how AI/LLMs built into software and operating systems may be turned to detect or control various forms of human activity.

“So far, fortunately, things are looking good. But what lies ahead if the technical capability exists and is built in? Such powerful capabilities signal potential future risks related to using AI to control the behavior of societies at scale, or selectively. This is probably among the most dangerous information technology capabilities ever developed. How do we govern this?”

Michael Veale, associate professor of technology law at University College London (UCL), also raised the specter of function creep flowing from Google’s conversation-scanning AI, warning in a reaction post on X that it “sets up infrastructure for on-device client-side scanning for more purposes than this, which regulators and legislators will desire to abuse.”

Privacy experts in Europe have particular reason for concern: since 2022, the EU has been debating a controversial message-scanning legislative proposal that critics, including the bloc’s own data protection supervisor, warn represents a tipping point for democratic rights in the region, as it would force platforms to scan private messages by default.

While the current legislative proposal claims to be technology-agnostic, it is widely expected that such a law would lead platforms to deploy client-side scanning in order to respond to so-called “detection orders” requiring them to spot both known and unknown CSAM, and also to pick up grooming activity in real time.

Earlier this month, hundreds of privacy and security experts penned an open letter warning that the plan could lead to millions of false positives per day, because the unproven client-side scanning technologies platforms are likely to deploy in response to legal orders are deeply flawed and vulnerable to attack.

Google was contacted for a response to concerns that its conversation-scanning AI could erode people’s privacy, but it had not replied by press time.

Read more about Google I/O 2024 at TechCrunch

