Practice Guide: AI Ethics & Governance for Enterprise Architecture¶
This guide shows how to use the Linked.Archi AI Ethics & Governance extension to manage AI system governance within your architecture knowledge graph. It addresses the gap identified by Gartner's 2025 Leadership Vision — that EA teams lack AI ethics and governance frameworks — and provides a practical approach aligned with the EU AI Act and OECD AI Principles.
Extension artifacts:
- `extensions/ai-governance/ai-governance-onto.ttl` — Classes, properties, viewpoints, concerns
- `extensions/ai-governance/ai-governance-tax.ttl` — SKOS taxonomy (by risk level, principle, lifecycle, assessment type)
- `extensions/ai-governance/ai-governance-shapes.ttl` — SHACL governance rules
- `extensions/ai-governance/ai-governance-metamodel.ttl` — Metamodel manifest (entry point)
- `extensions/ai-governance/ai-governance-reference-data.ttl` — EU AI Act risk levels, OECD principles, oversight modes
The Problem¶
Organizations deploying AI systems face a governance gap:
- Regulatory pressure — The EU AI Act (Regulation 2024/1689) requires formal risk classification, conformity assessment, and human oversight for high-risk AI systems. Compliance deadlines are phased through 2027.
- Missing governance layer — The ML-Enabled Systems extension provides the technical layer (models, pipelines, serving infrastructure) but not the governance layer that connects these components to regulatory and ethical frameworks.
- EA sidelined from AI — Gartner reports that executives perceive EA teams as lacking AI expertise. Without a formal governance vocabulary, EA cannot contribute to AI strategy discussions.
- No traceability — When an auditor asks "which AI systems are high-risk and what assessments have been performed?", the answer requires manual compilation from scattered documents.
The AI Ethics & Governance extension solves this by making risk classifications, conformity assessments, bias assessments, explainability reports, and human oversight plans into first-class elements in the knowledge graph — queryable, validatable, and traceable to the ML components they govern.
Quick Start¶
Step 1: Import the Extension¶
Your ontology imports the AI governance extension alongside the ML Systems extension:
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .
@prefix mlsys: <https://meta.linked.archi/ml-systems/onto#> .
@prefix arch: <https://meta.linked.archi/core#> .
<https://model.example.com/my-system#>
a owl:Ontology ;
owl:imports <https://meta.linked.archi/ai-governance/onto#> ;
owl:imports <https://meta.linked.archi/ml-systems/onto#> ; ## technical layer
owl:imports <https://meta.linked.archi/archimate3/onto#> ; ## optional
.
Step 2: Wrap Your ML Models in Governance¶
The aigov:AISystem is the governance wrapper — it connects the technical ML components
to the governance artifacts:
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .
@prefix aigovrd: <https://meta.linked.archi/ai-governance/reference-data#> .
@prefix mlsys: <https://meta.linked.archi/ml-systems/onto#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix : <https://model.example.com/my-system#> .
## The ML components (from the ML Systems extension)
:FraudDetectionModel a mlsys:MLModel ;
skos:prefLabel "Fraud Detection Model v3.1"@en ;
mlsys:hasModelVersion "3.1.0" ;
mlsys:trainedOn :TransactionDataset ;
mlsys:hasMonitoringPlan :FraudMonitor .
:FraudModelServer a mlsys:ServingInfrastructure ;
skos:prefLabel "Fraud Model Serving Service"@en ;
mlsys:serves :FraudDetectionModel .
## The governance wrapper
:FraudDetectionAISystem a aigov:AISystem ;
skos:prefLabel "Fraud Detection AI System"@en ;
skos:definition '''AI system that scores payment transactions for fraud risk
in real-time. Used in credit card authorization flow.'''@en ;
aigov:wrapsMLModel :FraudDetectionModel ;
aigov:wrapsServingInfra :FraudModelServer ;
aigov:governedBy :CorporateAIPolicy .
Step 3: Classify the Risk Level¶
Every AI system must have a risk classification under the EU AI Act:
:FraudRiskClassification a aigov:RiskClassification ;
aigov:classifiedAs aigovrd:HighRisk ;
aigov:classificationRationale '''Credit scoring falls under Annex III, Section 5(b)
of the EU AI Act — AI systems used to evaluate creditworthiness or
establish credit scores of natural persons.'''@en ;
aigov:classificationDate "2026-03-01"^^xsd:date ;
aigov:classifiedBy :ChiefDataOfficer .
:FraudDetectionAISystem aigov:hasRiskClassification :FraudRiskClassification .
Step 4: Document Assessments¶
High-risk AI systems require conformity assessments, bias assessments, and explainability documentation:
## Conformity assessment
:FraudConformityAssessment a aigov:ConformityAssessment ;
skos:prefLabel "Fraud Detection Conformity Assessment Q1 2026"@en ;
aigov:assessedAgainstPrinciple aigovrd:FairnessPrinciple,
aigovrd:TransparencyPrinciple,
aigovrd:AccountabilityPrinciple ;
aigov:assessmentResult "Conditional — fairness remediation required"@en ;
aigov:assessmentDate "2026-03-15"^^xsd:date ;
aigov:assessedBy :AIEthicsBoard ;
aigov:assessmentFindings '''Demographic parity gap of 3.2% exceeds the 2%
organizational threshold. Model shows higher false positive rates for
transactions from certain geographic regions.'''@en ;
aigov:remediationAction '''Retrain model with geographically balanced dataset.
Implement post-processing calibration. Reassess within 60 days.'''@en .
:FraudDetectionAISystem aigov:hasConformityAssessment :FraudConformityAssessment .
## Bias assessment
:FraudBiasAssessment a aigov:BiasAssessment ;
skos:prefLabel "Fraud Detection Bias Assessment — Geographic Fairness"@en ;
aigov:assessedAgainstPrinciple aigovrd:FairnessPrinciple ;
aigov:assessmentResult "Conditional — geographic bias detected"@en ;
aigov:assessmentDate "2026-03-10"^^xsd:date ;
aigov:assessedBy :FraudDataScientist ;
aigov:assessmentFindings '''False positive rate for transactions originating
from Eastern European countries is 4.1% vs 1.8% global average.
Root cause: training data overrepresents Western European transaction
patterns.'''@en ;
aigov:remediationAction '''Augment training data with balanced geographic
representation. Apply fairness-aware regularization during training.'''@en .
:FraudDetectionAISystem aigov:hasBiasAssessment :FraudBiasAssessment .
## Explainability report
:FraudExplainabilityReport a aigov:ExplainabilityReport ;
skos:prefLabel "Fraud Detection Explainability Report"@en ;
skos:definition '''SHAP-based feature importance analysis for the fraud detection
model. Provides both global explanations (which features matter most across
all predictions) and local explanations (why a specific transaction was
flagged).'''@en .
:FraudDetectionAISystem aigov:hasExplainabilityReport :FraudExplainabilityReport .
Step 5: Define Human Oversight¶
:FraudOversightPlan a aigov:HumanOversightPlan ;
skos:prefLabel "Fraud Detection Human Oversight Plan"@en ;
aigov:oversightMode aigovrd:HumanOnTheLoop ;
aigov:oversightResponsible :FraudAnalystTeam ;
aigov:escalationProcedure '''Transactions flagged with confidence > 0.95 are
automatically blocked. Transactions with confidence 0.7-0.95 are queued
for human review. Fraud analysts review queued transactions within 30
minutes during business hours.'''@en ;
aigov:interventionConditions '''Human intervention required when: (1) false positive
rate exceeds 3% for any geographic region, (2) model drift alert triggers,
(3) customer complaint about wrongful blocking.'''@en .
:FraudDetectionAISystem aigov:hasHumanOversightPlan :FraudOversightPlan .
Step 6: Validate with SHACL¶
Run the SHACL shapes in `extensions/ai-governance/ai-governance-shapes.ttl` against your model to check governance compliance. The shapes enforce rules like:
- Every AI system must have a risk classification
- Every risk classification must have a rationale and a risk level
- High-risk AI systems must have conformity assessments, bias assessments, and human oversight plans
- Every conformity assessment must reference at least one principle and have a result
- Every human oversight plan must specify an oversight mode and a responsible stakeholder
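As a sketch, the first rule could be expressed as a shape like the following. The shape IRI, prefix, and message text here are illustrative; the actual definitions live in `ai-governance-shapes.ttl`:

```turtle
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix aigov: <https://meta.linked.archi/ai-governance/onto#> .
@prefix : <https://model.example.com/shapes#> .

## Illustrative shape: every aigov:AISystem needs at least one
## risk classification (shape IRI and message are examples only)
:AISystemRiskShape a sh:NodeShape ;
    sh:targetClass aigov:AISystem ;
    sh:property [
        sh:path aigov:hasRiskClassification ;
        sh:class aigov:RiskClassification ;
        sh:minCount 1 ;
        sh:message "Every AI system must have a risk classification."@en ;
    ] .
```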
The Extension in Detail¶
Element Types¶
| Element | Description |
|---|---|
| `aigov:AISystem` | Governance wrapper around ML components — the unit of governance |
| `aigov:RiskClassification` | Risk level assignment with rationale and date |
| `aigov:ConformityAssessment` | Formal assessment against regulatory requirements |
| `aigov:BiasAssessment` | Assessment of bias in data, model, or deployment |
| `aigov:ExplainabilityReport` | Documentation of how AI decisions can be explained |
| `aigov:HumanOversightPlan` | Plan for human oversight with mode, roles, and escalation |
| `aigov:AIGovernancePolicy` | Organizational policy governing AI development |
| `aigov:AIIncident` | Record of an AI system incident |
| `aigov:TransparencyRecord` | Disclosure documentation for users and affected persons |
| `aigov:DataGovernanceRecord` | Data governance measures for training/evaluation data |
Reference Data¶
- EU AI Act Risk Levels: `UnacceptableRisk`, `HighRisk`, `LimitedRisk`, `MinimalRisk`
- AI Principles (OECD + EU): `FairnessPrinciple`, `TransparencyPrinciple`, `AccountabilityPrinciple`, `SafetyPrinciple`, `PrivacyPrinciple`, `HumanAgencyPrinciple`, `SocialWellbeingPrinciple`
- Human Oversight Modes: `HumanInTheLoop`, `HumanOnTheLoop`, `HumanInCommand`
Concerns¶
| Concern | Description |
|---|---|
| `aigov:RegulatoryComplianceConcern` | Compliance with EU AI Act, GDPR, sector regulations |
| `aigov:AIAccountabilityConcern` | Clear assignment of responsibility for AI behavior |
| `aigov:AITransparencyConcern` | Disclosure of AI capabilities and limitations |
| `aigov:HumanOversightConcern` | Appropriate human control over AI decisions |
Viewpoints¶
| Viewpoint | Stakeholders | Concerns | View Type |
|---|---|---|---|
| AI Governance Overview | Ethics Officer | Regulatory compliance, accountability | Catalog, Matrix |
| AI Risk Assessment | Ethics Officer, Data Scientist | Regulatory compliance, human oversight | Matrix, Catalog |
| AI Transparency & Disclosure | Ethics Officer | Transparency | Catalog |
| AI Incident Tracking | Ethics Officer, ML Engineer | Accountability, compliance | Catalog |
Composing with Other Extensions¶
AI Governance + ML Systems¶
This is the primary composition. The AI governance extension adds the governance layer on top of the ML Systems technical layer:
┌─────────────────────────────────────────────────┐
│ AI Ethics & Governance (aigov:) │
│ Risk classification, conformity assessment, │
│ bias assessment, human oversight, incidents │
├─────────────────────────────────────────────────┤
│ ML-Enabled Systems (mlsys:) │
│ Models, datasets, pipelines, serving, monitoring │
├─────────────────────────────────────────────────┤
│ arch:core │
│ Elements, relationships, viewpoints, governance │
└─────────────────────────────────────────────────┘
AI Governance + Architecture Decisions¶
Governance choices become traceable decisions:
:ADR-AIGovernanceFramework a ad:Decision ;
skos:prefLabel "ADR-051: Adopt EU AI Act compliance framework"@en ;
ad:justification '''The EU AI Act requires formal risk classification and
conformity assessment for high-risk AI systems. Our fraud detection and
credit scoring systems fall under Annex III. We adopt the Linked.Archi
AI Governance extension as the formal framework.'''@en ;
ad:influencedByForce :Force-RegulatoryCompliance, :Force-ReputationRisk ;
arch:refines :FraudDetectionAISystem, :CreditScoringAISystem .
AI Governance + ArchiMate¶
AI systems can be linked to ArchiMate application components:
:FraudScoringService a am:ApplicationService ;
skos:prefLabel "Fraud Scoring Service"@en ;
am:serves :PaymentAuthorizationProcess .
:FraudModelServer mlsys:integratesWith :FraudScoringService .
## The governance wrapper covers the full chain
:FraudDetectionAISystem aigov:wrapsServingInfra :FraudModelServer .
SPARQL Queries¶
Which AI systems are high-risk and lack conformity assessments?¶
PREFIX aigov: <https://meta.linked.archi/ai-governance/onto#>
PREFIX aigovrd: <https://meta.linked.archi/ai-governance/reference-data#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?system ?label WHERE {
?system a aigov:AISystem ;
skos:prefLabel ?label ;
aigov:hasRiskClassification ?rc .
?rc aigov:classifiedAs aigovrd:HighRisk .
FILTER NOT EXISTS { ?system aigov:hasConformityAssessment ?ca }
}
What is the governance status of all AI systems?¶
PREFIX aigov: <https://meta.linked.archi/ai-governance/onto#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?system ?label ?riskLevel
(COUNT(DISTINCT ?ca) AS ?conformityAssessments)
(COUNT(DISTINCT ?ba) AS ?biasAssessments)
(BOUND(?hop) AS ?hasOversightPlan)
WHERE {
?system a aigov:AISystem ;
skos:prefLabel ?label ;
aigov:hasRiskClassification ?rc .
?rc aigov:classifiedAs ?riskLevel .
OPTIONAL { ?system aigov:hasConformityAssessment ?ca }
OPTIONAL { ?system aigov:hasBiasAssessment ?ba }
OPTIONAL { ?system aigov:hasHumanOversightPlan ?hop }
}
GROUP BY ?system ?label ?riskLevel ?hop
Which ML models have bias assessments that need remediation?¶
PREFIX aigov: <https://meta.linked.archi/ai-governance/onto#>
PREFIX mlsys: <https://meta.linked.archi/ml-systems/onto#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?model ?modelLabel ?result ?remediation WHERE {
?system aigov:wrapsMLModel ?model ;
aigov:hasBiasAssessment ?ba .
?model skos:prefLabel ?modelLabel .
?ba aigov:assessmentResult ?result ;
aigov:remediationAction ?remediation .
FILTER(CONTAINS(?result, "Conditional") || CONTAINS(?result, "Fail"))
}
Which AI systems have had incidents?¶
PREFIX aigov: <https://meta.linked.archi/ai-governance/onto#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?system ?label ?incidentDate ?severity ?description WHERE {
?system a aigov:AISystem ;
skos:prefLabel ?label ;
aigov:hasIncident ?incident .
?incident aigov:incidentDate ?incidentDate ;
aigov:incidentSeverity ?severity ;
aigov:incidentDescription ?description .
}
ORDER BY DESC(?incidentDate)
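The query above assumes incident records shaped like the following sketch. The property names are taken directly from the query; the severity value and label are illustrative:

```turtle
:FraudBlockingIncident a aigov:AIIncident ;
    skos:prefLabel "Wrongful blocking spike (2026-04-02)"@en ;
    aigov:incidentDate "2026-04-02"^^xsd:date ;
    aigov:incidentSeverity "Medium"@en ;
    aigov:incidentDescription '''Model drift caused a spike in false
        positives; 120 legitimate transactions were blocked before the
        fraud analyst team rolled back to model version 3.0.'''@en .

:FraudDetectionAISystem aigov:hasIncident :FraudBlockingIncident .
```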
Validation¶
Syntax Validation¶
.scripts/validate.sh --syntax extensions/ai-governance/ai-governance-onto.ttl
.scripts/validate.sh --syntax extensions/ai-governance/ai-governance-tax.ttl
.scripts/validate.sh --syntax extensions/ai-governance/ai-governance-shapes.ttl
.scripts/validate.sh --syntax extensions/ai-governance/ai-governance-metamodel.ttl
.scripts/validate.sh --syntax extensions/ai-governance/ai-governance-reference-data.ttl
SHACL Validation¶
Once the SHACL profile is registered, run the shapes against your model.
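Assuming the validation script exposes a SHACL mode analogous to its `--syntax` mode, the invocation might look like the following. The `--shacl`, `--shapes`, and `--data` flags are guesses — check the script's help output for the actual interface:

```shell
## hypothetical flags — verify against .scripts/validate.sh
.scripts/validate.sh --shacl \
    --shapes extensions/ai-governance/ai-governance-shapes.ttl \
    --data my-system.ttl
```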
This validates that:
- Every AI system has a risk classification with rationale
- High-risk systems have conformity assessments, bias assessments, and human oversight plans
- Every conformity assessment references principles and has a result
- Every human oversight plan specifies a mode and responsible stakeholder
- Every AI incident has a date, description, and severity
References¶
- EU AI Act — Regulation 2024/1689
- OECD AI Principles
- EU Ethics Guidelines for Trustworthy AI
- ISO/IEC 42001:2023 — AI Management System
- NIST AI Risk Management Framework
- Gartner 2025 Leadership Vision for EA
- ML-Enabled Systems Practice Guide — Companion guide for the technical ML layer
- Gartner 2025 Extensions — Cross-extension composition and queries