A Stanford report reveals that the world's largest AI models fall short on transparency


Stanford University recently introduced the Foundation Model Transparency Index, which aims to establish a transparency benchmark for governments and companies

A recent report from Stanford HAI (Human-Centered Artificial Intelligence) reveals that prominent developers of AI foundation models, including OpenAI and Meta, are not providing sufficient information about the potential societal impact of their models.

Stanford HAI introduced the Foundation Model Transparency Index, which assessed whether the creators of the top 10 AI models disclose details about how their models are developed and used. Meta's Llama 2 received the highest score, followed by BloomZ and OpenAI's GPT-4, yet none of them achieved notably high transparency ratings.

The evaluation relied on 100 indicators covering how each model is built, how it functions, and how it is used. Although Llama 2, BloomZ, and GPT-4 received the relatively higher scores, their creators disclosed no information about the models' societal impact, including privacy, copyright, or bias-related concerns.

The objective of this index, according to Rishi Bommasani, a lead at the Stanford Center for Research on Foundation Models, is to establish a benchmark for governments and companies. Proposed regulations, such as the EU's AI Act, may soon compel developers of major foundation models to provide transparency reports.

