## Model Concepts (OWL Ontology)

Extension ontology for AI ethics and governance. Provides element types, properties, and reference data for managing AI system risk classification, conformity assessment, bias assessment, explainability documentation, human oversight plans, and governance policies.

Builds on the ML-Enabled Systems extension (`mlsys:`) by adding the governance layer that connects ML components to regulatory frameworks (the EU AI Act), ethical principles (the OECD AI Principles), and organizational AI governance policies.

Motivated by the Gartner 2025 Leadership Vision, which identifies AI ethics and governance as a critical gap for EA teams, and by the EU AI Act (Regulation (EU) 2024/1689), which requires formal risk classification and conformity assessment for high-risk AI systems.

Namespace: https://meta.linked.archi/ai-governance/onto#
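A minimal Turtle sketch of the kind of vocabulary such an ontology provides. The class and property names below (`aigov:RiskClassification`, `aigov:hasRiskClassification`, `mlsys:MLEnabledSystem`) and the `mlsys:` namespace URI are illustrative assumptions, not the published terms:

```turtle
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .
@prefix mlsys: <https://meta.linked.archi/mlsys/onto#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical element type: a risk classification attached to an AI system.
aigov:RiskClassification a owl:Class ;
    rdfs:label "Risk Classification"@en ;
    rdfs:comment "Classification of an AI system under the EU AI Act risk tiers."@en .

# Hypothetical property linking an ML-enabled system to its classification.
aigov:hasRiskClassification a owl:ObjectProperty ;
    rdfs:domain mlsys:MLEnabledSystem ;
    rdfs:range  aigov:RiskClassification .
```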
## Formal Rules (SHACL Shapes)

SHACL shapes for validating AI governance models. Enforces three governance rules: every AI system must have a risk classification; high-risk systems must have conformity assessments and human oversight plans; and all AI systems must have explainability documentation.

Namespace: https://meta.linked.archi/ai-governance/shapes#
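The first rule ("every AI system must have a risk classification") could be expressed as a SHACL node shape roughly like this; the shape name and the `aigov:`/`mlsys:` terms it references are assumptions for illustration, not the published shapes:

```turtle
@prefix sh:     <http://www.w3.org/ns/shacl#> .
@prefix shapes: <https://meta.linked.archi/ai-governance/shapes#> .
@prefix aigov:  <https://meta.linked.archi/ai-governance/onto#> .
@prefix mlsys:  <https://meta.linked.archi/mlsys/onto#> .

# Hypothetical shape: every AI system needs at least one risk classification.
shapes:AISystemRiskShape a sh:NodeShape ;
    sh:targetClass mlsys:MLEnabledSystem ;
    sh:property [
        sh:path     aigov:hasRiskClassification ;
        sh:minCount 1 ;
        sh:severity sh:Violation ;
        sh:message  "Every AI system must have a risk classification."@en ;
    ] .
```

Running a SHACL validator over a model plus this shape would flag any `mlsys:MLEnabledSystem` instance lacking the property.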
## Concept Classification (SKOS)

Classification of AI governance concepts by risk level, principle, lifecycle phase, and assessment type.

Scheme URI: https://meta.linked.archi/ai-governance/tax#AIGovernanceConceptScheme
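A sketch of how such a scheme is typically declared in SKOS; only the scheme URI is taken from this page, and the facet concept shown is a hypothetical example:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix tax:  <https://meta.linked.archi/ai-governance/tax#> .

tax:AIGovernanceConceptScheme a skos:ConceptScheme ;
    skos:prefLabel "AI Governance Concept Scheme"@en .

# Hypothetical top concept for one of the four classification facets.
tax:RiskLevel a skos:Concept ;
    skos:inScheme     tax:AIGovernanceConceptScheme ;
    skos:topConceptOf tax:AIGovernanceConceptScheme ;
    skos:prefLabel    "Risk Level"@en .
```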
## Reference Data

Reference data for AI governance: EU AI Act risk levels, OECD AI Principles, human oversight modes, and assessment statuses.

Namespace: https://meta.linked.archi/ai-governance/reference-data#
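The EU AI Act distinguishes four risk tiers (unacceptable, high, limited, minimal), so the reference data presumably includes one individual per tier. The local names and the `aigov:RiskLevel` type below are assumptions about how they might be modeled:

```turtle
@prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
@prefix ref:   <https://meta.linked.archi/ai-governance/reference-data#> .
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .

# Hypothetical individuals for the four EU AI Act risk tiers.
ref:UnacceptableRisk a aigov:RiskLevel ; skos:prefLabel "Unacceptable risk"@en .
ref:HighRisk         a aigov:RiskLevel ; skos:prefLabel "High risk"@en .
ref:LimitedRisk      a aigov:RiskLevel ; skos:prefLabel "Limited risk"@en .
ref:MinimalRisk      a aigov:RiskLevel ; skos:prefLabel "Minimal risk"@en .
```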
| Type | Resource | Description | URI |
|------|----------|-------------|-----|
| Model Concepts (OWL Ontology) | onto | Extension ontology for AI ethics and governance: element types, properties, and reference data for risk classification, conformity assessment, bias assessment, explainability documentation, human oversight plans, and governance policies. | https://meta.linked.archi/ai-governance/onto# |
| Formal Rules (SHACL Shapes) | shapes | SHACL shapes for validating AI governance models: enforces risk classification, conformity assessment, human oversight, and explainability rules. | https://meta.linked.archi/ai-governance/shapes# |
| Concept Classification (SKOS) | AI Governance Concept Scheme | Classification of AI governance concepts by risk level, principle, lifecycle phase, and assessment type. | https://meta.linked.archi/ai-governance/tax#AIGovernanceConceptScheme |
| Reference Data | reference-data | EU AI Act risk levels, OECD AI Principles, human oversight modes, and assessment statuses. | https://meta.linked.archi/ai-governance/reference-data# |