India grapples with election misinformation, weighs labels and its own AI safety alliance

India has long been a testbed for how technology is used to persuade the public, and artificial intelligence has become a global hot topic for how it is used and abused in political discourse, especially in democratic processes. Now, the tech companies that developed these tools are traveling to the country to promote their solutions.

Earlier this year, Andy Parsons, the Adobe senior director who oversees the company’s involvement in the cross-industry Content Authenticity Initiative (CAI), was drawn into the whirlpool when he traveled to India to meet the country’s media and technology organizations and promote tools that can be integrated into content workflows to identify and label AI-generated content.

“We as a society, instead of detecting what is fake or manipulated, which is a matter of international concern, should start declaring authenticity, which means stating whether something was generated by artificial intelligence, so that consumers know,” he said in an interview.

Parsons added that a number of Indian companies – which are currently not part of the Munich AI election security agreement signed in February by OpenAI, Adobe, Google and Amazon – intend to build a similar alliance in the country.

“Legislation is a very tricky thing. The assumption that the government will legislate correctly and quickly enough in any jurisdiction is difficult to rely on. The government would be better off taking a very steady approach and taking its time,” he said.

Detection tools are notoriously inconsistent, but they are a start to solving some of the problems, or so the argument goes.

“The concept is well understood,” he said during his trip to Delhi. “I’m helping to raise awareness that the tools are ready, too. This isn’t just an idea. This is something that’s already deployed.”

Andy Parsons, Senior Director, Adobe. Image Source: Adobe

CAI, which promotes royalty-free open standards for identifying whether digital content was generated by machines or humans, predates the current hype around generative AI: it was founded in 2019 and currently has 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is a growing industry using artificial intelligence to create media, there is a smaller industry emerging that attempts to counter some of its more nefarious applications.

So, in February 2021, Adobe took it a step further and helped build one of those standards itself, co-founding the Coalition for Content Provenance and Authenticity (C2PA) with Arm, BBC, Intel, Microsoft and Truepic. The consortium aims to develop an open standard that uses metadata on images, videos, text and other media to highlight their provenance, telling people where a file originated, where and when it was generated, and whether it was altered before reaching the user. CAI partners with C2PA to promote the standard and make it available to the public.
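To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of information such a provenance manifest records. The field names and helper function are invented for illustration and deliberately simplified; they are not the actual C2PA data format or any real C2PA API.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: a simplified stand-in for the kind of provenance
# record the C2PA standard describes, not the real C2PA format.
def build_provenance_manifest(file_bytes: bytes, generator: str,
                              ai_generated: bool) -> dict:
    """Capture where a file came from and whether AI produced it."""
    return {
        "claim_generator": generator,  # the tool asserting provenance
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,  # the "was this AI?" declaration
        # The hash binds the manifest to this exact file: if the file
        # is altered later, the recorded hash no longer matches.
        "content_sha256": hashlib.sha256(file_bytes).hexdigest(),
    }

if __name__ == "__main__":
    manifest = build_provenance_manifest(
        b"...image bytes...", generator="ExampleEditor/1.0",
        ai_generated=True)
    print(json.dumps(manifest, indent=2))
```

In the real standard, such a manifest is cryptographically signed and embedded in, or linked from, the media file itself, which is what lets a viewer verify the claims later.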

Now the group is actively working with governments such as India’s to expand adoption of the standard to highlight the provenance of AI content, and to work with authorities on developing guidelines for AI.

Adobe has nothing to gain and everything to lose by taking an active role in this game. It hasn’t acquired or built a large language model of its own, but as the home of apps like Photoshop and Lightroom, it is a market leader in tools for the creative community. So it is not only building new products like Firefly to generate AI content itself, but also weaving AI into its legacy products. If the market develops as some believe it will, AI will be a must-have if Adobe wants to stay ahead of the curve. And if regulators (or common sense) have their way, Adobe’s future may well depend on its ability to ensure that the products it sells don’t contribute to the chaos.

In any case, the overall situation in India is indeed a mess.

Google is using India as a testbed for how it restricts its generative AI tool Gemini when it comes to election-related queries; political parties are weaponizing AI to create memes lampooning their opponents; Meta has set up a deepfake “helpline” for WhatsApp, reflecting the messaging platform’s popularity in spreading AI-powered misinformation; and at a time when countries are increasingly worried about AI safety and what must be done to ensure it, it remains to be seen what effect the Indian government’s decision in March to relax rules on how new AI models are built, tested and deployed will have. It is certainly intended to spur more AI activity, at any rate.

Using its open standard, C2PA has developed a digital nutrition label for content called Content Credentials. CAI members are working to deploy these digital watermarks on their content to let users know its origin and whether it was generated by artificial intelligence. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom, and they are automatically attached to AI content generated by Firefly, Adobe’s AI model. Last year, Leica launched cameras with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

AI-generated Content Credentials on an image. Image Source: Content Credentials
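Continuing the hypothetical sketch from above, the consumer-facing side of such a label reduces to re-hashing the file and comparing the result with what the manifest recorded. Real Content Credentials verification also validates cryptographic signatures and certificate chains, which this simplified example omits.

```python
import hashlib

# Companion to the earlier sketch: the verification half. A real
# Content Credentials check also validates signatures; this only
# demonstrates the tamper-evidence idea behind the label.
def is_untampered(file_bytes: bytes, manifest: dict) -> bool:
    """True if the file still matches the hash recorded at creation."""
    return hashlib.sha256(file_bytes).hexdigest() == manifest["content_sha256"]

def label_summary(manifest: dict, file_bytes: bytes) -> str:
    """Render a tiny nutrition-label-style summary for a viewer."""
    origin = "AI-generated" if manifest["ai_generated"] else "camera/edited"
    status = "intact" if is_untampered(file_bytes, manifest) else "ALTERED"
    return f"{origin} by {manifest['claim_generator']}; file {status}"
```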

Parsons told TechCrunch that CAI is in talks with governments around the world in two areas: helping to promote the standard as an international standard, and encouraging governments to adopt it.

“In an election year it’s particularly critical, with candidates, parties, incumbent offices and governments releasing material to the media and the public all the time, to make sure that if something is published as coming from Prime Minister [Narendra] Modi’s office, it actually is from Prime Minister Modi’s office. There have been many incidents where that was not the case. So it’s important for consumers, fact-checkers, platforms and intermediaries to know that something is actually true,” he said.

He added that curbing misinformation is challenging in India given its large, diverse population and wide range of languages, which argues in favor of simple labels to cut through the noise.

“It’s kind of like ‘CR’… like most Adobe tools, it’s two Western letters, but it indicates that there’s more context to be shown,” he said.

Debate continues over the real motive behind tech companies’ support for any kind of AI safety measure: Is it truly about existential concern, or just about having a seat at the table, projecting the impression of existential concern while ensuring their interests are protected in the rulemaking process?

In their defence, he said: “This is generally not controversial among the companies involved. All the companies that signed the recent Munich accord, including Adobe, came together and set aside competitive pressure, because these are ideas that we all need to work on.”
