New features in the ISO 42001:2023 standard: Artificial intelligence - Management system - Requirements and guidance
January 15, 2026
ISO 42001:2023 (First Edition)
"Information technology - Artificial intelligence - Management system"
ISO 42001:2023 is the first edition of this standard for artificial intelligence management systems (AIMS).
Adopting an artificial intelligence management system allows you to:
- ensure the ethical, transparent, reliable and secure use of artificial intelligence (AI)
- comply with applicable regulations
- promote a culture of responsible development and deployment of AI systems
ISO 42001:2023 requirements: coming soon
ISO 42001:2023 requirements quiz: coming soon
PQB T 29v23 Training: ISO 42001:2023 Readiness and free demo (no registration required) coming soon
PQB T 49v23 Training: ISO 42001:2023 Internal Audit and free demo (no registration required) coming soon
PQB T 89v23 Training Package: ISO 42001:2023 Training coming soon
1. SPECIFIC AI REQUIREMENTS
The requirements of ISO 42001 follow the common structure for management system standards (HLS, High-Level Structure), which facilitates their integration with other existing management systems such as ISO 9001 and ISO 27001.
- Context
- Understand the organization's internal and external context, including stakeholders and their expectations
- Define the scope of the AIMS, taking into account AI-related activities, products, and services
- Leadership
- Management must demonstrate its commitment to the AIMS by defining an AI management policy
- Assign clear roles and responsibilities
- Planning
- Identify and assess AI-related risks and opportunities, integrating these elements into the planning processes
- Establish measurable AIMS objectives and plan the actions needed to achieve them
- Support
- Provide the necessary resources (human, technical, and financial)
- Raise awareness and train staff on AI issues and AIMS requirements
- Ensure effective internal and external communication on AI-related matters
- Operation
- Plan and control AI-related business processes, ensuring compliance with legal, regulatory, and ethical requirements
- Document processes and maintain traceability of AI-related activities
- Performance
- Monitor, measure, analyze, and evaluate the performance of the AIMS
- Conduct internal audits to verify compliance with the standard's requirements and identify areas for improvement
- Continual Improvement
- Address nonconformities and implement corrective actions
- Continually improve the AIMS through management reviews and lessons learned analysis
- Annex A: Specific controls
The standard includes an annex detailing specific measures for managing AI-related risks, such as transparency, fairness, security, confidentiality, and the robustness of AI systems
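The planning requirements above (identify and assess AI-related risks, § 6.1.2) can be illustrated with a minimal likelihood × impact risk register. The sketch below is hypothetical: ISO 42001 does not prescribe any scoring scale, and the thresholds, field names, and example risks are all assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int  # hypothetical scale: 1 (rare) .. 5 (almost certain)
    impact: int      # hypothetical scale: 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact product
        return self.likelihood * self.impact

    def priority(self) -> str:
        # Thresholds are arbitrary; the standard leaves the method open
        if self.score >= 15:
            return "treat immediately"
        if self.score >= 8:
            return "plan treatment"
        return "accept / monitor"

register = [
    AIRisk("Training data contains biased labels", likelihood=4, impact=4),
    AIRisk("Model drift after deployment", likelihood=3, impact=2),
]

# List risks from highest to lowest score
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.priority():<18} {risk.description}")
```

Any comparable method (qualitative scales, bow-tie analysis, etc.) would satisfy the clause equally well; the point is only that assessment criteria are defined and applied consistently.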
2. CLAUSES IN ACCORDANCE WITH THE HIGH-LEVEL STRUCTURE
- Scope
- Normative references
- Terms and definitions
- Context
- Leadership
- Planning
- Support
- Operation
- Performance
- Improvement
- Annexes A, B, C, and D
3. DOCUMENTS, RECORDS
Documented information available, to be retained, identified, and controlled:
- Scope (§ 4.3)
- Risk actions (§ 6.1.1)
- Declaration of applicability (§ 6.1.3)
- Results of the AI system impact assessment (§ 6.1.4)
- AI objectives (§ 6.2)
- Resources (A.4.2)
- Data resources (A.4.3)
- Tool resources (A.4.4)
- Staff competence (§ 7.2)
- Documents of external origin (§ 7.5.3)
- Operational control (§ 8.1)
- Risk assessment results (§ 8.2)
- Risk treatment results (§ 8.3)
- AI system impact assessment results (§ 8.4)
- Inspection results (§ 9.1)
- Audit program and audit results (§ 9.2.2)
- Management review results (§ 9.3.3)
- Nonconformities, nature, actions, and results (§ 10.2.2)
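The records listed above must also be identified and controlled (§ 7.5.3). A minimal sketch of a documented-information register follows, assuming a simple annual-review rule; the clause references come from the list above, while the field names, owners, and review interval are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    clause: str        # e.g. "6.1.3" or "A.4.3", from the list above
    title: str
    owner: str
    last_review: date
    review_interval_days: int = 365  # hypothetical control rule

    def is_overdue(self, today: date) -> bool:
        return today - self.last_review > timedelta(days=self.review_interval_days)

registry = [
    Record("4.3", "Scope of the AIMS", "quality manager", date(2025, 3, 1)),
    Record("6.1.3", "Statement of applicability", "AI officer", date(2023, 6, 1)),
]

today = date(2026, 1, 15)
overdue = [r for r in registry if r.is_overdue(today)]
for r in overdue:
    print(f"{r.clause} {r.title!r} overdue (owner: {r.owner})")
```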
5. PROCESSES
Processes:
- Assess Risk (§ 6.1.2)
- Treat Risk (§ 6.1.3)
- Assess the AI System Impact (§ 6.1.4, A.5.2)
- Control Operational Requirements (§ 8.1)
- Report Concerns (A.3.3)
- Design and develop responsibly (A.6.1.3)
- Develop and improve the AI system (A.7.2)
- Record origin (A.7.5)
- Use the AI system responsibly (A.9.2)
- Ensure responsible supplier approach (A.10.3)
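Among the processes above, risk treatment (§ 6.1.3) produces a declaration (statement) of applicability covering the Annex A controls. A minimal sketch of such a statement is shown below; the subset of controls is taken from Annex A, but the applicability decisions and justifications are entirely invented for illustration.

```python
# Control titles taken from Annex A of ISO 42001:2023
controls = {
    "A.2.2": "AI policy",
    "A.4.3": "Data resources",
    "A.6.2.5": "AI system deployment",
}

# Hypothetical decisions: control id -> (applicable?, justification)
decisions = {
    "A.2.2": (True, "required for all AIMS scopes"),
    "A.4.3": (True, "the organization trains models on its own data"),
    "A.6.2.5": (False, "no AI system is deployed in-house"),
}

def statement_of_applicability(controls, decisions):
    # One line per control, stating inclusion or exclusion with a reason
    lines = []
    for cid, title in sorted(controls.items()):
        applicable, reason = decisions[cid]
        status = "applicable" if applicable else "excluded"
        lines.append(f"{cid} {title}: {status} ({reason})")
    return lines

for line in statement_of_applicability(controls, decisions):
    print(line)
```

The key requirement is not the format but that every Annex A control is either adopted or explicitly justified as excluded.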
6. POLICY
Policy:
- AI Policy (§ 5.2)
7. THE VERB "SHALL" IS USED 206 TIMES
8. DETAILS OF CLAUSES AND SUB-CLAUSES (PARAGRAPHS)
1 Scope
2 Normative references
3 Terms and definitions
4 Context of the organization
4.1 Understanding the organization and its context
4.2 Understanding the needs and expectations of interested parties (stakeholders)
4.3 Determining the scope of the AI management system
4.4 AI Management System
5 Leadership
5.1 Leadership and commitment
5.2 AI policy
5.3 Roles, responsibilities, and authorities
6 Planning
6.1 Actions to address risks and opportunities
6.1.1 General
6.1.2 AI risk assessment
6.1.3 AI risk treatment
6.1.4 AI system impact assessment
6.2 AI objectives and planning to achieve them
6.3 Planning of changes
7 Support
7.1 Resources
7.2 Competence
7.3 Awareness
7.4 Communication
7.5 Documented information (documentation)
7.5.1 General
7.5.2 Creating and updating documented information
7.5.3 Control of documented information
8 Operation
8.1 Operational planning and control
8.2 AI risk assessment
8.3 AI risk treatment
8.4 AI system impact assessment
9 Performance evaluation
9.1 Monitoring, measurement, analysis, and evaluation
9.2 Internal audit
9.2.1 General
9.2.2 Internal audit program
9.3 Management review
9.3.1 General
9.3.2 Management review inputs
9.3.3 Management review outputs
10 Improvement
10.1 Continual improvement
10.2 Nonconformity and corrective action
Annex A (normative) Reference control objectives and controls
A.1 General
A.2 Policies related to AI
A.2.2 AI policy
A.2.3 Alignment with other organizational policies
A.2.4 Review of the AI policy
A.3 Internal organization
A.3.2 AI roles and responsibilities
A.3.3 Reporting of concerns
A.4 Resources for AI systems
A.4.2 Resource documentation
A.4.3 Data resources
A.4.4 Tooling resources
A.4.5 System and computing resources
A.4.6 Human resources
A.5 Assessing impacts of AI systems
A.5.2 AI system impact assessment process
A.5.3 Documentation of AI system impact assessments
A.5.4 Assessing AI system impact on individuals or groups of individuals
A.5.5 Assessing societal impacts of AI systems
A.6 AI system life cycle
A.6.1 Management guidance for AI system development
A.6.1.2 Objectives for responsible development of AI system
A.6.1.3 Process for responsible AI system design and development
A.6.2 AI system life cycle
A.6.2.2 AI system requirements and specification
A.6.2.3 Documentation of AI system design and development
A.6.2.4 AI system verification and validation
A.6.2.5 AI system deployment
A.6.2.6 AI system operation and monitoring
A.6.2.7 AI system technical documentation
A.6.2.8 AI system recording of event logs
A.7 Data for the AI system
A.7.2 Data for development and enhancement of AI system
A.7.3 Acquisition of data
A.7.4 Quality of data for AI systems
A.7.5 Data provenance
A.7.6 Data preparation
A.8 Information for interested parties of AI systems
A.8.2 System documentation and information for users
A.8.3 External reporting
A.8.4 Communication of incidents
A.8.5 Information for interested parties
A.9 Use of AI systems
A.9.2 Processes for responsible use of AI systems
A.9.3 Objectives for responsible use of AI system
A.9.4 Intended use of the AI system
A.10 Third-party and customer relationships
A.10.2 Allocating responsibilities
A.10.3 Suppliers
A.10.4 Customers
Annex B (normative) Implementation guidance for AI controls
B.1 General
B.2 Policies related to AI
B.2.1 Objective
B.2.2 AI policy
B.2.3 Alignment with other organizational policies
B.2.4 Review of the AI policy
B.3 Internal organization
B.3.1 Objective
B.3.2 AI roles and responsibilities
B.3.3 Reporting of concerns
B.4 Resources for AI systems
B.4.1 Objective
B.4.2 Resource documentation
B.4.3 Data resources
B.4.4 Tooling resources
B.4.5 System and computing resources
B.4.6 Human resources
B.5 Assessing impacts of AI systems
B.5.1 Objective
B.5.2 AI system impact assessment process
B.5.3 Documentation of AI system impact assessments
B.5.4 Assessing AI system impact on individuals or groups of individuals
B.5.5 Assessing societal impacts of AI systems
B.6 AI system life cycle
B.6.1 Management guidance for AI system development
B.6.1.1 Objective
B.6.1.2 Objectives for responsible development of AI system
B.6.1.3 Processes for responsible design and development of AI systems
B.6.2 AI system life cycle
B.6.2.1 Objective
B.6.2.2 AI system requirements and specification
B.6.2.3 Documentation of AI system design and development
B.6.2.4 AI system verification and validation
B.6.2.5 AI system deployment
B.6.2.6 AI system operation and monitoring
B.6.2.7 AI system technical documentation
B.6.2.8 AI system recording of event logs
B.7 Data for AI system
B.7.1 Objective
B.7.2 Data for development and enhancement of AI system
B.7.3 Acquisition of data
B.7.4 Quality of data for AI systems
B.7.5 Data provenance
B.7.6 Data preparation
B.8 Information for interested parties
B.8.1 Objective
B.8.2 System documentation and information for users
B.8.3 External reporting
B.8.4 Communication of incidents
B.8.5 Information for interested parties
B.9 Use of AI systems
B.9.1 Objective
B.9.2 Processes for responsible use of AI systems
B.9.3 Objectives for responsible use of AI system
B.9.4 Intended use of the AI system
B.10 Third-party and customer relationships
B.10.1 Objective
B.10.2 Allocating responsibilities
B.10.3 Suppliers
B.10.4 Customers
Annex C (informative) Potential AI-related organizational objectives and risk sources
C.1 General
C.2 Objectives
C.3 Risk sources
Annex D (informative) Use of the AI management system across domains or sectors
D.1 General
D.2 Integration of AI management system with other management system standards
9. REMARKS
The requirement in 6.1.2 b) is an inhumane physical and intellectual challenge. See Oxebridge's blog post "iso-risk-management-can-now-be-infallible".
Annex A is normative, but clause A.1 (General) states that use of the listed objectives and controls is not mandatory, which can be confusing.
Annex B is labeled normative, yet the verb used throughout is "should," which is not very logical.
