Tech | Visa | Scholarship/School | Info Place

UK and US agree to work together to develop safe testing of artificial intelligence models

The UK government has formally agreed to work with the US to develop tests of advanced artificial intelligence models. British Technology Secretary Michelle Donelan and U.S. Commerce Secretary Gina Raimondo signed a non-legally binding memorandum of understanding on April 1, 2024 (Figure A).

Figure A

US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan.
U.S. Commerce Secretary Gina Raimondo (left) and British Technology Secretary Michelle Donelan (right). Source: UK Government.Image: UK Government

The two countries will now “align their respective scientific approaches” and work together to “accelerate and rapidly iterate robust assessment suites for AI models, systems and agents.” The action follows commitments made at the first Global AI Safety Summit last November, where governments around the world accepted their role in safety testing of next-generation AI models.

What AI initiatives have the UK and US agreed on?

Through the memorandum of understanding, the UK and the US have agreed on how to establish a common approach to AI safety testing and share progress with each other. Specifically, this will involve:

  • Develop a shared process for assessing the safety of AI models.
  • Conduct at least one joint test on a publicly accessible model.
  • Collaborating on AI safety technical research can both advance collective knowledge of AI models and ensure consistency in any new policies.
  • Personnel exchanges between agencies.
  • Share information on all activities carried out by agencies.
  • Work with other governments to develop AI standards, including safety standards.

“Through our collaboration, our institutes will better understand AI systems, conduct more robust assessments, and issue more rigorous guidance,” Secretary Raimondo said in a statement.

SEE: Learn how to use AI for your business (TechRepublic Academy)

The MoU primarily involves advancing plans developed by the UK and US AI Security Institutes. The UK research body was launched at the AI ​​Safety Summit with three main goals: to assess existing AI systems, conduct basic AI safety research and share information with other national and international actors. Companies including OpenAI, Meta and Microsoft have agreed to independent reviews of their latest generative AI models by the UK’s AISI.

Similarly, the U.S. AISI was formally established by NIST in February 2024 to work on priority actions outlined in the AI ​​Executive Order issued in October 2023; these actions include developing security standards for artificial intelligence systems. AISI in the United States is supported by the Alliance of Artificial Intelligence Security Institutes, whose members include Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.

Will this lead to regulation of AI companies?

Although neither the UK nor US AISI is a regulator, their combined findings may inform future policy changes. The UK government said its AISI “will provide fundamental insights into our governance regime”, while the US agency will “develop technical guidance for use by regulators”.

The EU arguably remains one step ahead, with its landmark Artificial Intelligence Act voted into law on March 13, 2024. The bill outlines measures aimed at ensuring the safe and ethical use of artificial intelligence, as well as other rules regarding facial recognition and transparency in artificial intelligence.

See: Most cybersecurity professionals expect artificial intelligence to impact their jobs

Most large tech companies, including OpenAI, Google, Microsoft and Anthropic, are based in the United States, which currently does not have strict regulations to limit their artificial intelligence activities. The October EO did provide guidance on the use and regulation of AI, and positive steps have been taken since it was signed; however, this legislation is not law. The Artificial Intelligence Risk Management Framework finalized by NIST in January 2023 is also voluntary.

In fact, most of these major technology companies are responsible for self-regulation, and last year launched a cutting-edge model forum to establish their own “guardrails” to reduce the risks of artificial intelligence.

What do AI and legal experts think about security testing?

AI regulation should be a priority

The establishment of the UK AISI is not a universally popular way of controlling artificial intelligence in the country. In February, the CEO of Faculty AI, a company associated with the institute, said developing strong standards might be a more prudent use of government resources rather than trying to censor every AI model.

“I think it’s important that it sets standards for the wider world rather than trying to do everything ourselves,” Mark Warner told the Guardian.

Technology law experts held a similar view when it came to this week’s memorandum of understanding. “Ideally, states’ efforts would be better spent on crafting tough regulations rather than research,” Aron Solomon, a legal analyst and chief strategy officer at legal marketing agency Amplify, told TechRepublic in an email. .”

“But here’s the thing: Very few lawmakers — especially, I would say, the U.S. Congress — understand AI well enough to regulate it.

Solomon added: “We should leave rather than enter a necessary period of deep study in which legislators really focus their collective minds on how AI works and how it will be used in the future. But, as U.S. lawmakers recently tried to As the failure to ban TikTok underscores, they as a group do not understand technology, and therefore they are not equipped to regulate it intelligently.

“That puts us in a difficult situation today. Artificial intelligence is developing much faster than regulators can regulate it. But to delay regulation at this time and take other measures is to postpone the inevitable.”

In fact, as the capabilities of AI models continue to change and expand, the security testing conducted by both institutes will need to do the same. Christoph Cemper, CEO of real-time management platform AIPRM, told TechRepublic in an email that “some bad actors may try to circumvent testing or abuse the dual functionality of AI.” Dual use means that it can be used for both peaceful purposes and Technology used for hostile purposes.

“While testing can flag technical safety issues, it does not replace the need for guidance on ethical, policy and governance issues… Ideally, both governments will view testing as an initial stage in an ongoing collaborative process,” Cemper said.

SEE: Generative AI could increase global ransomware threat, according to a National Cyber ​​Security Center study

Effective AI regulation requires research

Dr Kyle Carlson said that while voluntary guidelines may not be enough to prompt any real changes in the activities of tech giants, tough legislation could stifle progress in artificial intelligence if not properly considered.

“Compromise is a real and growing threat in areas related to AI today,” the former ML/AI analyst, now head of strategy at Domino Data Labs, told TechRepublic in an email. “In fraud and cybercrime, compromise is a real and growing threat.” In other areas, regulation often exists but is ineffective.

“Unfortunately, few of the proposed AI regulations, such as the EU Artificial Intelligence Act, are designed to effectively address these threats, as they mainly focus on commercial AI products that are not used by criminals. As a result, many regulatory measures will harm innovation and increase costs, while doing little to improve actual safety.”

As a result, many experts believe that prioritizing research and collaboration is more effective than rushing into UK and US regulations

Dr. Carlson said: “Regulation is effective in preventing established harm caused by known use cases. Today, however, most use cases for artificial intelligence have not been discovered and almost all harms are hypothetical. In contrast, for Research on how to effectively test, reduce risks and ensure the safety of artificial intelligence models is very necessary.

“So the creation and funding of these new AI Security Institutes, along with these international collaborative efforts, is an excellent public investment that not only ensures security but also builds the competitiveness of American and British businesses.”

#agree #work #develop #safe #testing #artificial #intelligence #models

Leave a Reply

Your email address will not be published. Required fields are marked *

Index