Linked.Archi AI Ethics & Governance Metamodel Definition

Metamodel Manifest

https://meta.linked.archi/ai-governance/metamodel#

Version: 0.1.0 (draft) · Prefix: aigovmm: · Author: Kalin Maldzhanski (Linked.Archi) · Modified: 2026-05-03 · License

Metamodel manifest for the AI Ethics & Governance extension. Ties together the AI governance ontology, SKOS taxonomy, SHACL shapes, and reference data into a single discoverable resource. This is the entry point for tools that need to discover all resources that make up the AI Ethics & Governance modeling vocabulary. Designed to compose with the ML-Enabled Systems extension — the ML extension provides the technical layer (models, pipelines, serving), while this extension provides the governance layer (risk classification, conformity assessment, bias assessment, human oversight).

The AI Ethics & Governance metamodel definition, aggregating the AI governance ontology, SKOS taxonomy, SHACL shapes, and reference data. Designed to be composed with the ML-Enabled Systems metamodel and other metamodels (ArchiMate, C4, Backstage) via owl:imports to add AI governance capabilities to any architecture description. Addresses the gap identified by Gartner 2025 Leadership Vision: EA teams lack AI ethics and governance frameworks. Provides formal ontology resources for EU AI Act compliance, OECD AI Principles, and organizational AI governance policies.
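The composition mechanism described above can be sketched in Turtle. This is a minimal, hypothetical manifest header, assuming the constituent resource IRIs listed in this document; the actual manifest may carry additional annotations.

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical manifest ontology header: the metamodel IRI imports its
# constituent resources, so tools can discover all of them by computing
# the owl:imports closure from this single entry point.
<https://meta.linked.archi/ai-governance/metamodel#>
    a owl:Ontology ;
    owl:versionInfo "0.1.0" ;
    dct:modified "2026-05-03"^^xsd:date ;
    owl:imports <https://meta.linked.archi/ai-governance/onto#> ,
                <https://meta.linked.archi/ai-governance/shapes#> ,
                <https://meta.linked.archi/ai-governance/tax#> ,
                <https://meta.linked.archi/ai-governance/reference-data#> .
```

The same pattern extends to cross-metamodel composition: an architecture description can import both this manifest and the ML-Enabled Systems manifest to combine the governance and technical layers.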

Constituent Resources

Model Concepts (OWL Ontology)

onto

Extension ontology for AI ethics and governance. Provides element types, properties, and reference data for managing AI system risk classification, conformity assessment, bias assessment, explainability documentation, human oversight plans, and governance policies. Builds on the ML-Enabled Systems extension (mlsys:) by adding the governance layer that connects ML components to regulatory frameworks (EU AI Act), ethical principles (OECD AI Principles), and organizational AI governance policies. Motivated by Gartner 2025 Leadership Vision identifying AI ethics and governance as a critical gap in EA teams, and by the EU AI Act (Regulation 2024/1689) requiring formal risk classification and conformity assessment for high-risk AI systems.
https://meta.linked.archi/ai-governance/onto#
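As an illustration of the kind of element types and properties the ontology provides, here is a hedged Turtle sketch. The class and property names (aigov:RiskClassification, aigov:hasRiskClassification) are assumptions for illustration, not taken from the published ontology.

```turtle
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .

# Hypothetical element type: an EU AI Act risk classification record
# that the governance layer attaches to an ML system element.
aigov:RiskClassification a owl:Class ;
    rdfs:label "Risk Classification" ;
    rdfs:comment "EU AI Act risk classification attached to an AI system." .

# Hypothetical property linking an AI system to its classification.
aigov:hasRiskClassification a owl:ObjectProperty ;
    rdfs:range aigov:RiskClassification .
```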
Formal Rules (SHACL Shapes)

shapes

SHACL shapes for validating AI governance models. Enforces governance rules: every AI system must have a risk classification, high-risk systems must have conformity assessments and human oversight plans, and all AI systems must have explainability documentation.
https://meta.linked.archi/ai-governance/shapes#
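The first governance rule above ("every AI system must have a risk classification") could be expressed roughly as follows. The shape and class names are illustrative assumptions; only the rule itself comes from this document.

```turtle
@prefix sh:     <http://www.w3.org/ns/shacl#> .
@prefix aigov:  <https://meta.linked.archi/ai-governance/onto#> .
@prefix shapes: <https://meta.linked.archi/ai-governance/shapes#> .

# Hypothetical shape: every AI system element must carry at least one
# risk classification, or validation reports a violation.
shapes:AISystemShape a sh:NodeShape ;
    sh:targetClass aigov:AISystem ;
    sh:property [
        sh:path aigov:hasRiskClassification ;
        sh:minCount 1 ;
        sh:message "Every AI system must have a risk classification." ;
    ] .
```

The conditional rules (conformity assessments and human oversight plans for high-risk systems) would follow the same pattern with sh:condition or a class-specific target.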
Concept Classification (SKOS)

AI Governance Concept Scheme

Classification of AI governance concepts by risk level, principle, lifecycle phase, and assessment type.
https://meta.linked.archi/ai-governance/tax#AIGovernanceConceptScheme
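A sketch of how the scheme and one of its concepts might appear in SKOS. The scheme IRI is taken from this document; the example concept (tax:HighRisk) is a hypothetical member of the risk-level facet.

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix tax:  <https://meta.linked.archi/ai-governance/tax#> .

tax:AIGovernanceConceptScheme a skos:ConceptScheme ;
    skos:prefLabel "AI Governance Concept Scheme"@en .

# Hypothetical concept in the risk-level facet of the scheme.
tax:HighRisk a skos:Concept ;
    skos:prefLabel "High Risk"@en ;
    skos:inScheme tax:AIGovernanceConceptScheme .
```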
Reference Data

reference-data

Reference data for AI governance — EU AI Act risk levels, OECD AI Principles, human oversight modes, and assessment statuses.
https://meta.linked.archi/ai-governance/reference-data#
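For example, the EU AI Act's four risk tiers might be published as individuals in this namespace. The individual and class names below are assumptions; the four tiers themselves are those defined by Regulation 2024/1689.

```turtle
@prefix ref:   <https://meta.linked.archi/ai-governance/reference-data#> .
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical reference individuals for the EU AI Act risk tiers.
ref:UnacceptableRisk a aigov:RiskLevel ; rdfs:label "Unacceptable risk" .
ref:HighRisk         a aigov:RiskLevel ; rdfs:label "High risk" .
ref:LimitedRisk      a aigov:RiskLevel ; rdfs:label "Limited risk" .
ref:MinimalRisk      a aigov:RiskLevel ; rdfs:label "Minimal risk" .
```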

Concerns

AIAccountabilityConcern

AITransparencyConcern

HumanOversightConcern

RegulatoryComplianceConcern
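In RDF, the four concerns above would likely be declared as individuals of a concern class, in the style of ISO/IEC/IEEE 42010 architecture-description concerns. The typing class (aigov:Concern) is an assumption; the concern names are those listed here.

```turtle
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .

# Hypothetical typing of the four governance concerns; the actual
# concern class IRI may differ.
aigov:AIAccountabilityConcern     a aigov:Concern .
aigov:AITransparencyConcern       a aigov:Concern .
aigov:HumanOversightConcern       a aigov:Concern .
aigov:RegulatoryComplianceConcern a aigov:Concern .
```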