How to Comply with the EU AI Act? US NIST Proposes New "Machine-Readable" Compliance Standard
NIST proposes OSCAL to make AI governance framework compliance evidence machine-readable. Good news for companies facing EU AI Act compliance.
Background: The “Container” Problem for EU AI Act Compliance
The EU AI Act, which entered into force in 2024, imposes strict obligations on high-risk AI systems. However, there is no clear guidance on how to create and manage the “evidence” needed to demonstrate compliance with the regulation. International frameworks such as ISO/IEC 42001 and the NIST AI RMF specify “what should be assured,” but they do not address the practical question of “how to generate that evidence in a machine-readable format.”
This is where OSCAL (Open Security Controls Assessment Language), proposed by NIST (the US National Institute of Standards and Technology), comes in. The paper “Making AI Compliance Evidence Machine-Readable,” published on arXiv, details the possibilities and implementation methods for applying OSCAL to the AI governance domain.
What Exactly is OSCAL?
OSCAL is a framework developed by NIST to standardize assessment evidence for security controls. It is now being applied to AI governance.
Traditional compliance evidence has been created and maintained manually as PDF or Word documents. However, this approach presents challenges:
- Difficult for regulatory agencies to conduct efficient reviews
- Complicated to perform comparative analysis between companies
- Impossible to integrate into automated audit processes
OSCAL enables compliance evidence to be described in machine-readable formats such as JSON, XML, and YAML. This allows for the automatic generation and verification of risk assessments, test results, and audit trails for AI systems.
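To make this concrete, here is a minimal sketch of generating machine-readable evidence in Python. The structure is loosely modeled on OSCAL's assessment-results model, but the fields shown are abbreviated and illustrative; a real OSCAL document requires additional metadata and import references, and the finding content here is entirely hypothetical.

```python
import json
import uuid
from datetime import datetime, timezone

def build_assessment_result(title: str, findings: list[dict]) -> dict:
    """Build a minimal OSCAL-style assessment-results document.

    Simplified illustration only: not a schema-complete OSCAL file.
    """
    now = datetime.now(timezone.utc).isoformat()
    return {
        "assessment-results": {
            "uuid": str(uuid.uuid4()),
            "metadata": {
                "title": title,
                "last-modified": now,
                "version": "1.0",
                "oscal-version": "1.1.2",
            },
            "results": [
                {
                    "uuid": str(uuid.uuid4()),
                    "title": f["title"],
                    "description": f["description"],
                    "start": now,
                }
                for f in findings
            ],
        }
    }

# Hypothetical example: recording one AI test result as evidence.
doc = build_assessment_result(
    "AI model bias evaluation",
    [{"title": "Disparate impact test",
      "description": "Ratio within accepted threshold for all groups."}],
)
print(json.dumps(doc, indent=2))
```

Because the output is plain JSON, the same evidence can be validated, diffed, and fed into automated audit pipelines rather than reviewed by hand.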
Impact on Companies: Why It’s Gaining Attention Now
The EU AI Act will be fully implemented in 2026, and compliance certification will be essential for selling AI products in the European market. By adopting OSCAL:
1. Significant reduction in verification costs: shift from manual document creation and review to automated verification processes.
2. International interoperability: a unified format that can address multiple frameworks, including the EU AI Act, ISO/IEC 42001, and the NIST AI RMF.
3. Audit transparency: regulatory agencies can efficiently verify the authenticity of evidence, enabling reliable compliance demonstrations.
Future Prospects and Challenges
The application of OSCAL to AI governance is still in its early stages. The paper proposes mapping methods for EU AI Act risk classifications and technical requirements to OSCAL, as well as specific implementation architectures, but widespread adoption as a standard is expected to take time.
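The mapping idea can be sketched as a simple crosswalk table. The NIST AI RMF function names (GOVERN, MAP, MEASURE, MANAGE) and the EU AI Act article topics below are real, but the identifier strings and the specific pairings are illustrative assumptions, not the paper's mapping; an actual OSCAL implementation would express this through its profile and mapping models with verified control IDs.

```python
# Hypothetical crosswalk: EU AI Act requirements -> NIST AI RMF functions.
# Identifiers are illustrative; pairings would need expert validation.
CROSSWALK = {
    "eu-ai-act-art-9": {
        "topic": "Risk management system",
        "nist-ai-rmf": ["GOVERN", "MANAGE"],
    },
    "eu-ai-act-art-10": {
        "topic": "Data and data governance",
        "nist-ai-rmf": ["MAP", "MEASURE"],
    },
}

def frameworks_for(requirement_id: str) -> list[str]:
    """Return the mapped NIST AI RMF functions for an EU AI Act requirement."""
    return CROSSWALK.get(requirement_id, {}).get("nist-ai-rmf", [])

print(frameworks_for("eu-ai-act-art-9"))  # ['GOVERN', 'MANAGE']
```

Once such a crosswalk exists in machine-readable form, a single body of evidence can be checked against multiple frameworks automatically, which is the interoperability benefit described above.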
However, as the full implementation of the EU AI Act approaches, interest in OSCAL is growing rapidly because it offers a concrete answer to the practical question of “how to create compliance evidence.” For Japanese companies considering expanding AI products into the European market, this standardization trend is worth watching.
FAQ
Q: Is it impossible to achieve EU AI Act compliance without adopting OSCAL? A: No, the EU AI Act does not mandate the use of OSCAL. However, OSCAL can significantly streamline the process of creating, verifying, and reviewing compliance evidence. For companies operating in multiple markets, adopting OSCAL with its international interoperability will be advantageous in the long term.
Q: Is AI regulation compliance also necessary in Japan? A: Japan has not yet decided to implement mandatory AI regulations like the EU AI Act. However, the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications are advancing the development of AI governance guidelines. A standardized approach like OSCAL provides flexibility to address future domestic regulations.
Q: Where should companies starting with OSCAL begin? A: First, determine whether the company’s AI systems fall into the high-risk categories under the EU AI Act and conduct risk assessment and mapping. Then, based on frameworks such as NIST AI RMF and ISO/IEC 42001, it is recommended to initiate pilot projects to generate evidence in OSCAL format.
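The triage step described in this answer, checking whether a system falls into a high-risk category, could be sketched as follows. The area names paraphrase the EU AI Act Annex III high-risk areas; they are not legal text, and real classification requires legal review of the full criteria, not a string lookup.

```python
# Illustrative triage helper: does a use case fall in one of the
# EU AI Act Annex III high-risk areas? Paraphrased, not legal text.
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def is_high_risk(use_case_area: str) -> bool:
    """Rough first-pass check against the Annex III area list."""
    return use_case_area.strip().lower() in HIGH_RISK_AREAS

print(is_high_risk("Employment and worker management"))  # True
print(is_high_risk("Spam filtering"))                    # False
```

Systems flagged by such a first pass would then proceed to formal risk assessment and OSCAL-format evidence generation in the pilot project.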