18 countries join forces to safeguard AI systems against misuse


This collaboration reflects global efforts to develop AI safely

The United States, Britain, and over a dozen other countries have introduced the first international agreement focused on keeping artificial intelligence (AI) systems safe from misuse by rogue actors. The 18 countries have released a 20-page non-binding agreement that emphasizes the need for AI systems to be "secure by design."

The agreement includes general recommendations, such as monitoring AI systems for misuse, safeguarding data from tampering, and vetting software suppliers. Although it lacks binding legal force, the agreement marks an important step in prioritizing AI safety, according to Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency. The framework addresses concerns about preventing AI technology from being exploited by hackers, but it does not address questions about the appropriate use of AI or about data collection methods.

The agreement reflects global efforts to shape AI development, but its effectiveness remains uncertain due to its non-binding nature. Europe has been more proactive in regulating AI, with lawmakers working on AI rules and agreements regarding "mandatory self-regulation through codes of conduct" for foundational AI models. In contrast, the United States has faced challenges in passing effective AI regulations, despite efforts by the Biden administration to address AI risks and enhance national security through executive orders.
