Anthropic to Fund Initiative to Develop New Third-Party AI Benchmarks to Assess AI Models

On Tuesday, Anthropic announced a new initiative to develop benchmarks for testing the capabilities of advanced artificial intelligence (AI) models. The AI firm will fund the project and has invited applications from interested organisations. The company said existing benchmarks are not enough to fully test the capabilities and impact of newer large language models (LLMs), and that, as a result, a new set of evaluations focused on AI safety, advanced capabilities, and societal impact needs to be developed.

Anthropic to fund new benchmarks for AI models

In a newsroom post, Anthropic highlighted the need for a comprehensive third-party evaluation ecosystem to overcome the limited scope of current benchmarks. Through the initiative, the AI firm will fund third-party organisations that want to develop new assessments for AI models that meet high quality and safety standards.

For Anthropic, the high-priority areas include tasks and questions that can measure an LLM's AI Safety Levels (ASLs), its advanced capabilities in generating ideas and responses, and the societal impact of those capabilities.

Under the ASL category, the company highlighted several parameters, including the capability of AI models to assist in or autonomously run cyberattacks, their potential to help create chemical, biological, radiological and nuclear (CBRN) threats or enhance knowledge of how to create them, national security risk assessment, and more.

In terms of advanced capabilities, Anthropic said the benchmarks should be able to assess AI's potential to transform scientific research, its ability to engage with or refuse harmful requests, and its multilingual capabilities. Further, the AI firm said it is necessary to understand an AI model's potential to impact society. For this, the evaluations should be able to target concepts such as “harmful biases, discrimination, over-reliance, dependence, attachment, psychological influence, economic impacts, homogenization, and other broad societal impacts.”

Apart from this, the AI firm also listed some principles for good evaluations. It said evaluations should not appear in the training data used by AI models, as they otherwise turn into memorisation tests. It encouraged keeping between 1,000 and 10,000 tasks or questions per evaluation, and asked organisations to use subject matter experts to create tasks that test performance in a specific domain.
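As a rough illustration of how these principles might translate into practice, the minimal sketch below shows a hypothetical evaluation harness in Python: expert-written, domain-tagged tasks held out of training data, with a check against the suggested 1,000–10,000 task range. All names here (EvalTask, query_model, run_eval) are assumptions made for illustration and do not correspond to any real Anthropic API or tooling.

```python
# Hypothetical sketch of a third-party evaluation harness, illustrating the
# principles described above. Names are illustrative assumptions, not a real
# Anthropic API.

from dataclasses import dataclass

@dataclass
class EvalTask:
    prompt: str              # question posed to the model
    reference_answer: str    # expert-written expected answer
    domain: str              # e.g. "cybersecurity", "CBRN", "multilingual"

def query_model(prompt: str) -> str:
    """Stub standing in for a call to the model under evaluation."""
    return "placeholder response"

def run_eval(tasks: list[EvalTask]) -> dict[str, float]:
    """Score the model per domain with a naive exact-match grader."""
    # The article suggests 1,000-10,000 tasks; warn outside that range.
    if not 1_000 <= len(tasks) <= 10_000:
        print(f"warning: {len(tasks)} tasks is outside the suggested range")

    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for task in tasks:
        total[task.domain] = total.get(task.domain, 0) + 1
        response = query_model(task.prompt)
        if response.strip().lower() == task.reference_answer.strip().lower():
            correct[task.domain] = correct.get(task.domain, 0) + 1
    return {d: correct.get(d, 0) / total[d] for d in total}

if __name__ == "__main__":
    demo = [EvalTask("What is 2 + 2?", "4", "arithmetic")]
    print(run_eval(demo))
```

A real harness would replace the exact-match grader with expert or model-based grading, which matters for the open-ended safety and societal-impact tasks the article describes.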

