The National Institute of Standards and Technology (NIST), the U.S. Department of Commerce agency that develops and tests technologies for the U.S. government, businesses, and the general public, has re-released a testbed designed to measure how malicious attacks, particularly those that “poison” the training data of AI models, can degrade the performance of AI systems.
First released in 2022, Dioptra (named after a classic astronomical and surveying instrument) is a modular, open-source, web-based tool that aims to help companies that train AI models, as well as those that use them, to assess, analyze, and track AI risks. NIST said Dioptra can be used to benchmark and investigate models, and to provide a common platform for exposing models to simulated threats in a “red team” environment.
“Testing the effects of adversarial attacks on machine learning models is one of Dioptra’s goals,” NIST said in a press release. “Available for free download, the open-source software can help communities such as government agencies and small and medium-sized businesses evaluate AI developers’ claims about system performance.”
Dioptra debuted alongside a document from NIST and its recently established AI Safety Institute, which outlines ways to mitigate some of the dangers of AI, such as the potential for AI to be misused to generate nonconsensual pornography. It follows the release of Inspect from the UK AI Safety Institute, a toolset similarly aimed at assessing the capabilities of models and overall model safety. The U.S. and U.K. have an ongoing partnership to jointly develop advanced AI model testing, announced at the U.K.’s AI Safety Summit at Bletchley Park last November.
Dioptra is also a product of President Joe Biden’s Executive Order (EO) on AI, which (among other things) requires NIST to help with AI system testing. The EO also establishes standards for AI safety and security, and requires companies developing models (such as Apple) to notify the federal government and share the results of all safety tests before the models are deployed to the public.
As we’ve written before, AI benchmarking is hard. It’s especially difficult because today’s most sophisticated AI models are black boxes — their infrastructure, training data, and other key details are kept secret by the companies that created them. A report published this month by the Ada Lovelace Institute, a UK-based nonprofit research institute that studies AI, found that evaluations alone are not enough to determine how safe an AI model actually is, in part because current policies allow AI vendors to cherry-pick which evaluations they perform.
NIST did not claim that Dioptra can completely eliminate risks to models. But the agency did propose that Dioptra can shed light on which kinds of attacks might make an AI system perform less effectively and quantify that impact on performance.
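To make that idea concrete, here is a minimal, hypothetical sketch of the kind of experiment such a testbed is meant to support. It is not Dioptra’s actual interface; it is a generic label-flipping data-poisoning test built on scikit-learn, in which a classifier trained on partially poisoned data is compared against one trained on clean data, and the drop in test accuracy serves as the quantified impact.

```python
# Illustrative sketch (not Dioptra's API): measure how label-flipping
# "poisoning" of training data degrades a simple classifier's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training examples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Train on increasingly poisoned data and report test accuracy each time.
for fraction in [0.0, 0.1, 0.3]:
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction {fraction:.0%}: test accuracy {acc:.3f}")
```

The gap between the clean-data baseline and the poisoned runs is one simple way to express, in numbers, how much a given attack hurts a model.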
A major limitation, however, is that Dioptra only works out of the box with models that can be downloaded and run locally, like Meta’s expanding Llama family. Models gated behind an API, such as OpenAI’s GPT-4o, aren’t supported, at least for the time being.