ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems [DRAFT]
Abstract
“This document provides guidance for organizations to address security threats and failures in artificial intelligence (AI) systems. The guidance in this document aims to provide information to organizations to help them better understand the consequences of security threats to AI systems, throughout their lifecycle, and descriptions of how to detect and mitigate such threats.” [Source: ISO/IEC JTC 1/SC 27 SD11 July 2024]
Introduction
The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and responding to situations that would previously have required human beings. Currently, however, the smarts are limited, so the systems don’t always react as they should.
Scope of the standard
The standard will guide organisations on addressing security threats to Artificial Intelligence systems. It will:
- Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
- Explain how to detect and mitigate such threats.
Content of the standard
The standard will cover at least a dozen threats such as:
- Poisoning - data and model poisoning, e.g. deliberately injecting false information to mislead and hence harm a competitor’s AI system (illustrated in the sketch after this list);
- Evasion - deliberately misleading the AI algorithms using carefully-crafted inputs presented to the trained system (as opposed to poisoning, which targets the training data);
- Membership inference and model inversion - methods to infer [and potentially reconstruct] the data points used in training the system;
- Model stealing - theft of the valuable intellectual property in a trained AI system/model.
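To make the first of those threats more concrete, here is a minimal, hypothetical sketch of label-flipping data poisoning. It is not from the standard: the scikit-learn toy dataset, the logistic regression model and the 30% flip rate are all illustrative assumptions, chosen simply to show how an attacker with write access to training labels can silently degrade a retrained model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification task standing in for a real training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Simulated poisoning: the attacker flips 30% of the training labels
# (an assumed attack budget) before the model is retrained.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Test accuracy typically drops once the poisoned labels are learned.
print(f"Accuracy, clean training data:    {clean_model.score(X_test, y_test):.3f}")
print(f"Accuracy, poisoned training data: {poisoned_model.score(X_test, y_test):.3f}")
```

Detection in practice hinges on the kinds of controls the standard is expected to describe: integrity and provenance checks on training data, plus comparing model metrics before and after retraining, would flag exactly this sort of silent degradation.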
For each threat, the standard will offer about a page of advice:
- Describing the threat;
- Discussing the potential consequences of an attack;
- Explaining how to detect and mitigate attacks.
An extensive list of references will direct readers to further information, including relevant academic research and more pragmatic advice such as other standards.
Status
The project started in 2022.
The standard is fast approaching Draft International Standard stage. It remains on track for publication during 2025 (hopefully!).
Personal comments
Imprecise/unclear use of terminology in the drafts will be disappointing if it persists in the published standard. Are ‘security failures’ vulnerabilities, control failures, events or perhaps incidents? Are ‘threats’ attacks, information risks, threat agents, incidents or something else?
Detecting ‘threats’ (which generally refers to impending or in-progress attacks) is seen as a focal point for the standard, hinting that security controls cannot respond to undetected attacks ... which may be generally true for active responses but not for passive, general-purpose controls.
As usual with ‘cybersecurity’, the proposal and drafts concentrated on active, deliberate, malicious, targeted attacks on AI systems by motivated and capable adversaries, disregarding the possibility of natural and accidental threats such as design flaws and bugs, plus threats from within (i.e. insider threats).
The standard addresses ‘threats’ (attacks) to AI that are of concern to the AI system owner, rather than threats involving AI that are of concern to its users or to third parties, e.g. hackers and spammers misusing AI systems to learn new malevolent techniques. The rapid proliferation (explosion?) of publicly-accessible AI systems during 2023 put a rather different spin on this area.
The scope excludes ‘robot wars’ where AI systems are used to attack other AI systems. Scary stuff, if decades of science fiction and cinema blockbusters are anything to go by.
The potentially significant value of AI systems in identifying, evaluating and responding to information risks and security incidents is also out of scope for this standard: the whole thing is quite pessimistic, focusing on the negatives.
However, the hectic pace of progress in the AI field is a factor: this standard will provide a starting point, a foundation for further AI security standards and updates as the field matures.