
Linked.Archi ML-Specific Quality Attributes

Release: 2026-05-03

Modified on: 2026-05-03
This version:
https://meta.linked.archi/ml-systems/quality-attributes/0.1.0#
Revision:
0.1.0
Authors:
Kalin Maldzhanski
Publisher:
Linked.Archi
Source:
https://arxiv.org/abs/2308.05239
License:
http://creativecommons.org/licenses/by/4.0/
Visualization:
Visualize with WebVowl
Cite as:
Kalin Maldzhanski. Linked.Archi ML-Specific Quality Attributes. Revision: 0.1.0. Retrieved from: https://meta.linked.archi/ml-systems/quality-attributes/0.1.0#
Provenance of this page
draft

Linked.Archi ML-Specific Quality Attributes: Overview

This ontology defines the following named individuals.

Named Individuals

Linked.Archi ML-Specific Quality Attributes: Description

Quality attribute individuals specific to ML-enabled systems. Extends the base quality-attributes extension with ML-specific concerns identified by Moin et al. (2023) and Lewis et al. (2021).

Cross-reference for Linked.Archi ML-Specific Quality Attributes classes, object properties and data properties

This section provides details for each class and property defined by Linked.Archi ML-Specific Quality Attributes.

Named Individuals

Adversarial Robustness ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#AdversarialRobustness

The degree to which an ML model maintains correct behavior when subjected to adversarial inputs — deliberately crafted perturbations designed to cause misclassification. Includes robustness against evasion attacks, data poisoning, and model extraction.
belongs to
quality attribute c
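The evasion-attack scenario named above can be made concrete with the fast gradient sign method (FGSM). The following is a minimal NumPy sketch against a toy logistic-regression model; the weights, input, and epsilon budget are made up for illustration and are not part of the ontology.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, b, x, y, eps):
    """One FGSM step against a logistic-regression model.

    The gradient of the logistic loss -log sigmoid(y*(w.x+b)) with
    respect to x is -y * sigmoid(-y*(w.x+b)) * w; moving x along the
    sign of that gradient maximally increases the loss under an
    L-infinity budget eps.
    """
    margin = y * (w @ x + b)
    grad_x = -y * sigmoid(-margin) * w
    return x + eps * np.sign(grad_x)

# Toy model: classifies x by the sign of w.x + b (hypothetical values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])           # clean input with true label y = +1
y = 1
clean_pred = np.sign(w @ x + b)    # +1: classified correctly
x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
adv_pred = np.sign(w @ x_adv + b)  # -1: the perturbation flips the prediction
```

A model with higher adversarial robustness would require a larger eps before the prediction flips; certification and adversarial training aim to push that threshold up.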

Data Privacy (ML) ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#DataPrivacy

The degree to which an ML system protects the privacy of individuals whose data was used for training. Includes resistance to model inversion attacks, membership inference attacks, and training data memorization.
belongs to
quality attribute c
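The membership inference attack mentioned above can be sketched with the simplest variant, a loss-threshold attack: because models tend to memorize training examples, members usually incur lower loss than non-members. The losses below are synthetic stand-ins for a real model's per-example losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses: members (training data) tend to
# have lower loss than non-members because the model memorized them.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

def loss_threshold_attack(losses, tau):
    """Predict 'member' whenever the model's loss on the example is below tau."""
    return losses < tau

tau = 0.5
tpr = loss_threshold_attack(member_losses, tau).mean()     # true-positive rate
fpr = loss_threshold_attack(nonmember_losses, tau).mean()  # false-positive rate
advantage = tpr - fpr  # > 0 means the model leaks membership information
```

A privacy-preserving training regime (for example, differentially private SGD) aims to drive this attacker advantage toward zero.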

Explainability ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#Explainability

The degree to which an ML model's predictions can be understood, interpreted, and communicated to stakeholders. Includes local explanations (why this specific prediction) and global explanations (how the model generally behaves). Critical for regulatory compliance (GDPR Article 22), stakeholder trust, and debugging.
belongs to
quality attribute c
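The local-explanation idea ("why this specific prediction") can be illustrated with occlusion-style attributions: replace one feature at a time with a baseline value and record the score drop. This is a minimal model-agnostic sketch; the scoring function and baseline are invented for illustration.

```python
import numpy as np

def occlusion_explanation(predict, x, baseline):
    """Local explanation for a single prediction: the attribution of
    feature i is the score drop when x[i] is replaced by its baseline
    value. Model-agnostic; works for any scalar scoring function."""
    base_score = predict(x)
    attributions = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        attributions[i] = base_score - predict(x_masked)
    return attributions

# Toy scorer: score = 3*x0 - 2*x1 + 0*x2 (hypothetical model)
predict = lambda x: 3 * x[0] - 2 * x[1] + 0 * x[2]
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = occlusion_explanation(predict, x, baseline)  # [3.0, -2.0, 0.0]
```

For a linear model the attributions recover the weighted contributions exactly; for nonlinear models they are an approximation, which is what dedicated methods such as SHAP or LIME refine.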

Fairness ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#Fairness

The degree to which an ML model treats individuals or groups equitably, without systematic discrimination based on protected attributes. Includes demographic parity, equalized odds, and individual fairness metrics.
belongs to
quality attribute c
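The demographic parity and equalized odds metrics named above have direct computational definitions. The following is a minimal sketch over a hypothetical binary classifier and two groups; the predictions and labels are invented for illustration.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)|; 0 means parity."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap across groups in true-positive rate (on y=1)
    and false-positive rate (on y=0); 0 means equalized odds hold."""
    gaps = []
    for label in (1, 0):
        rate = lambda g: y_pred[(group == g) & (y_true == label)].mean()
        gaps.append(abs(rate(0) - rate(1)))
    return max(gaps)

# Toy predictions over two groups (hypothetical data)
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])

dp = demographic_parity_diff(y_pred, group)     # 0.25
eo = equalized_odds_gap(y_true, y_pred, group)  # 0.5
```

Note that the two criteria can disagree: a classifier can satisfy demographic parity while violating equalized odds, which is why fairness audits usually report several metrics.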

ML Monitorability ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#MLMonitorability

The degree to which an ML model's production behavior can be observed, measured, and alerted on. Includes prediction distribution monitoring, feature drift detection, data quality checks, and serving performance tracking.
belongs to
quality attribute c
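Feature drift detection, one of the concerns listed above, is commonly implemented with the Population Stability Index (PSI). This is a minimal sketch with synthetic data; the conventional thresholds in the docstring are rules of thumb, not part of the ontology.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time feature
    sample and a production sample. Rule of thumb: < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Production values outside the training range fall outside the
    # bins and are ignored in this sketch.
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 10_000)  # feature at training time
prod_same  = rng.normal(0.0, 1.0, 10_000)     # no drift
prod_drift = rng.normal(0.8, 1.0, 10_000)     # mean shifted in production

psi_stable = psi(train_feature, prod_same)    # small: distribution unchanged
psi_drifted = psi(train_feature, prod_drift)  # large: drift alert should fire
```

In production, a monitoring job would compute this per feature on a schedule and alert when the index crosses the drift threshold.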

Model Freshness ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#ModelFreshness

The degree to which an ML model reflects current real-world conditions. Inversely related to model staleness — a model trained on old data may not capture recent distribution shifts.
belongs to
quality attribute c

Reproducibility ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#Reproducibility

The degree to which an ML experiment or training run can be exactly reproduced given the same data, code, and configuration. Requires versioning of data, code, hyperparameters, and environment.
belongs to
quality attribute c
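The versioning requirement above can be sketched as two practices: pinning every source of randomness, and fingerprinting the run's inputs so that two runs claiming the same data, code version, and configuration can be checked for identity. The helper names and config keys below are hypothetical.

```python
import hashlib
import json
import random

import numpy as np

def run_fingerprint(data_bytes, config):
    """Deterministic fingerprint of a run's inputs: a hash of the
    training data plus the canonicalized (key-sorted) configuration.
    Two runs with the same fingerprint and the same code version
    should be reproducible."""
    h = hashlib.sha256()
    h.update(data_bytes)
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()

def seed_everything(seed):
    """Pin all sources of randomness used by the training code."""
    random.seed(seed)
    np.random.seed(seed)
    # torch.manual_seed(seed)  # add framework seeds if applicable

config = {"lr": 0.01, "epochs": 10, "seed": 7}  # hypothetical run config
data = b"training data bytes stand-in"

fp = run_fingerprint(data, config)
seed_everything(config["seed"])
run_a = np.random.rand(3)
seed_everything(config["seed"])
run_b = np.random.rand(3)
# Same seed and same inputs: identical results, identical fingerprint
assert (run_a == run_b).all() and fp == run_fingerprint(data, config)
```

Note that full bit-for-bit reproducibility also depends on the environment (library versions, hardware nondeterminism), which is why the description above lists environment versioning alongside data, code, and hyperparameters.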

Trustworthiness ni

IRI: https://meta.linked.archi/ml-systems/quality-attributes#Trustworthiness

The degree to which stakeholders can rely on an ML system's predictions and behavior. A composite quality attribute encompassing explainability, fairness, robustness, privacy, and accountability.
belongs to
quality attribute c

Legend

ni: Named Individuals

Acknowledgments

The authors would like to thank Silvio Peroni for developing LODE, the Live OWL Documentation Environment used to generate the cross-reference section of this document, and Daniel Garijo for developing Widoco, the program used to create the template of this documentation.