
EU publishes election security guidance for social media giants and other companies under the DSA

The EU published draft election security guidance on Tuesday, targeting the roughly two dozen larger platforms, those with more than 45 million monthly active users, that are regulated under the Digital Services Act (DSA) and therefore have legal obligations to mitigate systemic risks such as political deepfakes, while also protecting fundamental rights like free speech and privacy.

Platforms in scope include Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube and X, among others.

The Commission has listed elections as one of a handful of priority areas for enforcing the DSA on so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). On top of complying with the full online governance regime, companies regulated under the DSA must identify and mitigate systemic risks, such as information manipulation targeting the region’s democratic processes.

In line with its election security guidance, the EU expects regulated tech giants to step up their efforts to protect democratic votes: deploying capable content moderation resources in the multiple official languages spoken across the bloc; ensuring they have enough staff on hand to respond effectively to risks arising from information in their platforms’ feeds; and acting on reports from third-party fact-checkers. They risk hefty fines if they fail.

This will require platforms to strike a precise balance when moderating political content: not falling behind in their ability to distinguish between political satire, which should remain online as a protected form of free speech, and malicious political disinformation, whose creators may be hoping to sway voters and distort elections.

In the latter case, the content falls under the DSA’s systemic risk classification, and platforms are expected to detect and mitigate it swiftly. EU standards require them to take “reasonable, proportionate and effective” mitigation measures against risks to electoral processes, and to respect the other relevant provisions of the wide-ranging content moderation and governance regulation.

The Commission has been developing the election guidance at speed, launching a consultation on a draft version just last month. The sense of urgency in Brussels stems from the European Parliament elections coming up this June. Officials have said they will stress-test platforms’ readiness next month. The EU, then, is clearly not prepared to leave platform compliance to chance, even with hard law in place, so tech giants risk hefty fines if they fail to meet the Commission’s expectations this time around.

User controls for algorithm feeds

Central to the EU’s election guidance for mainstream social media companies and other major platforms is the expectation that they give users meaningful choice over algorithmic and AI-driven recommendation systems, so that people can exert some control over the type of content they see.

“Recommendation systems can play an important role in shaping the information landscape and public opinion,” the guidance states. “To mitigate the risks such systems may pose to the electoral process, [platform] providers … should consider: (i.) ensuring that recommendation systems are designed and adapted in a manner that provides users with meaningful choice and control over their feeds, with appropriate consideration for media diversity and pluralism.”

Platforms’ recommendation systems should also take steps to downrank election-related disinformation, based on what the guidance calls a “clear and transparent approach”: for example, deceptive content that has been fact-checked as false, and/or posts from accounts repeatedly found to be spreading disinformation.

Platforms must also deploy mitigations to prevent their recommendation systems from spreading AI-based disinformation, also known as political deepfakes. They should proactively assess their recommendation engines for risks related to electoral processes and roll out updates to mitigate any risks found. The EU also recommends transparency around the design and functioning of AI-driven feeds, and urges platforms to engage in adversarial testing, red-teaming and the like to sharpen their ability to detect and eliminate risks.

Regarding GenAI, the EU proposal also urges the watermarking of synthetic media – while noting the limits of technical feasibility here.

The mitigations and best practices the 25-page draft guidance recommends for large platforms also set out an expectation that platforms will dial up internal resourcing to focus on specific election threats, such as those surrounding upcoming election events, and put processes in place for sharing relevant information and risk analysis.

Resourcing should include local expertise

The guidance emphasizes the need for analysis of “local context-specific risks”, in addition to the gathering of Member State-specific/national and regional information, to feed the work of entities responsible for designing and calibrating risk mitigation measures. It also calls for “adequate content moderation resources”, with local language capacity and knowledge of national and/or regional contexts and specificities: a long-standing EU complaint about platforms’ efforts to shrink disinformation risks.

Another recommendation is that they establish “a dedicated, clearly identifiable internal team” ahead of the election period to strengthen internal processes and resources around each election campaign – with resourcing commensurate with the risks identified for the relevant election.

The EU guidance also explicitly recommends hiring staff with local expertise, including language knowledge; platforms have often sought to repurpose centralized resources instead, without always committing dedicated local expertise.

“The team should cover all relevant expertise, including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation, and cooperate with relevant external experts, such as the European Digital Media Observatory (EDMO) hubs and independent fact-checking organizations,” the EU also wrote.

The guidance allows platforms to ramp up resources around specific election campaigns and dismantle teams after voting closes.

It notes that the period during which additional risk mitigation measures may be needed will vary, depending on the level of risk and on Member States’ own rules around elections (which can differ). But the Commission recommends that platforms have mitigations deployed and operational at least one to six months before an election period, and keep them running for at least one month after the election.

Not surprisingly, the period leading up to the election date is expected to see the greatest intensity of mitigation efforts to address risks such as disinformation targeting the voting process.

Hate speech in the frame

The EU generally recommends that platforms draw on other existing codes, including the Code of Practice on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online, to identify best practices for mitigation measures. It also stipulates that they must ensure users have access to official information about the electoral process, such as banners, links and pop-ups designed to steer users toward authoritative sources of election information.

“In mitigating systemic risks to electoral integrity, the Commission recommends that due consideration also be given to the impact of measures to tackle illegal content, such as public incitement to violence and hatred, as such illegal content may stifle or silence voices in the democratic debate, particularly those representing disadvantaged or minority groups,” the Commission wrote.

“For example, various forms of racism, gendered disinformation and gender-based violence, including in the context of violent extremist or terrorist ideologies or FIMI targeting LGBTIQ+ communities, can undermine open, democratic dialogue and debate, and further exacerbate social divisions and polarization. In this regard, the Code of Conduct on Countering Illegal Hate Speech Online can serve as inspiration when considering appropriate action.”

It also recommends that platforms develop media literacy campaigns and adopt measures aimed at giving users more contextual information, such as fact-checking labels; prompts and nudges; clear identification of official accounts; clear and non-deceptive labeling of accounts operated by Member States, third countries and entities controlled or financed by third countries; tools and information to help users assess the trustworthiness of information sources; tools to assess provenance; and processes to counter abuse of any of these measures and tools. The list reads rather like an inventory of things Elon Musk has dismantled since taking over Twitter (now X).

Notably, Musk has also been accused of allowing hate speech to flourish on the platform during his tenure. As of this writing, X remains under investigation by the EU for a range of alleged DSA violations, including related to content moderation requirements.

Transparency to enhance accountability

On political advertising, the guidance points platforms toward incoming transparency rules in this area, advising them to take immediate steps to bring themselves into compliance and prepare for the legally binding regulation (for example, by clearly labeling political ads, providing information about the sponsors behind these paid political messages, maintaining a public repository of political ads, and putting systems in place to verify the identity of political advertisers).

Elsewhere, the guidance also sets out how to address electoral risks associated with influencers.

Platforms should also establish systems that can weed out disinformation and are urged to provide “stable and reliable” access to data to third parties that conduct reviews and research on election risks, according to the guidance. Access to data used to study electoral risks should also be provided free of charge, the recommendation states.

More broadly, the guidance encourages platforms to work with watchdogs, civil society experts and each other when sharing information about election security risks, urging them to establish communication channels for tips and risk reporting during elections.

To handle high-risk incidents, the guidance recommends that platforms establish an internal incident response mechanism that involves senior leadership and maps other relevant stakeholders within the organization, to drive accountability around election incident response and avoid the risk of buck-passing.

Following an election, the EU recommends that platforms conduct and publish a review of how they performed, factoring in third-party assessments (i.e., rather than simply seeking to mark their own homework, as they have historically preferred, in a bid to put a PR gloss on ongoing platform manipulation risks).

The election security guidance is not mandatory as such; but if platforms opt for approaches other than those recommended for tackling threats in this area, they must be able to demonstrate that their alternatives meet the EU’s standard.

If they fail to do so, they risk being found in breach of the DSA, which carries penalties of up to 6% of global annual turnover for confirmed violations. Platforms therefore have an incentive to engage with the bloc’s program, beefing up their resourcing for tackling political disinformation and other election-related information risks as a way to shrink their regulatory risk. But they will still need to follow through on the recommendations.

The EU guidance also contains further specific recommendations for the upcoming European Parliament elections from 6 to 9 June.

Technically, the election security guidance remains in draft form. But the Commission said it expects formal adoption in April, once versions of the guidance are available in all official EU languages.
