ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems [DRAFT]
Abstract
“This document provides guidance for organizations to address security threats and failures in artificial intelligence (AI) systems. The guidance in this document aims to provide information to organizations to help them better understand the consequences of security threats to AI systems, throughout their lifecycle, and descriptions of how to detect and mitigate such threats.” [Source: ISO/IEC JTC 1/SC 27 SD11]
Introduction
The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and reacting/responding to situations that would previously have required human beings. Currently, however, the smarts are limited, so the systems don’t always react as they should.
Scope of the standard
The standard will guide organisations on addressing security threats to Artificial Intelligence systems. It will:
- Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
- Explain how to detect and mitigate such threats.
Content of the standard
The 3rd Working Draft outlines threats such as:
- Poisoning - attacks on the system and/or data integrity, e.g. feeding false information into the training data to mislead and hence harm a competitor’s AI system (a simple illustration follows this list);
- Evasion - deliberately misleading the AI algorithms at inference time using carefully-crafted adversarial inputs;
- Membership inference and model inversion - methods to infer which data points were used to train the system and, potentially, to reconstruct them;
- Model stealing - theft of the valuable intellectual property embodied in a trained AI system/model.
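To make the first of those threats concrete, here is a minimal Python sketch of a crude label-flipping poisoning attack: an adversary who can corrupt a fraction of the training labels typically degrades the resulting model. This is purely illustrative and is not taken from the draft standard; the dataset, classifier and 20% flip rate are arbitrary assumptions for demonstration.

```python
# Illustrative sketch only: label-flipping data poisoning against a simple
# classifier, comparing accuracy trained on clean vs. corrupted labels.
# Dataset, model and 20% poisoning rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned copy of the training labels: flip 20% of them at random
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("Clean accuracy:   ", clean_model.score(X_test, y_test))
print("Poisoned accuracy:", poisoned_model.score(X_test, y_test))
```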
Status
The project started in 2022.
The standard is at 4th Working Draft stage, progressing nicely. It is due to be published in 2025.
Personal notes
Imprecise/unclear use of terminology in the drafts will be disappointing if it persists in the published standard. Are ‘security failures’ vulnerabilities, control failures, events, or perhaps incidents? Are ‘threats’ information risks, threat agents, incidents, or something else?
Detecting ‘threats’ (which I think means impending or in-progress attacks) is seen as an important area for the standard, implying that security controls cannot respond to undetected attacks ... which may be true for active responses, but not for passive, general-purpose controls.
As usual with ‘cybersecurity’, the proposal concentrates on active, deliberate, malicious attacks on AI systems by motivated and capable adversaries, disregarding the possibility of accidental and natural threats, and threats from within, i.e. insider threats.
The standard addresses ‘threats’ (risks) to AI that are of concern to the AI system owner, rather than threats involving AI that are of concern to its users or to third parties, e.g. hackers and spammers misusing ChatGPT to learn new techniques. Publicly-accessible systems based on GPT-3 etc. put a rather different spin on this area.
Even within the stated scope, I see no mention of ‘robot wars’ where AI systems are used to attack other AI systems. Scary stuff, if decades of science fiction are anything to go by.
The value of AI/ML systems in identifying, evaluating and responding to information risks and security incidents is evidently out of scope of this standard: the whole thing is quite pessimistic, focusing on the negatives.
However, the hectic pace of progress in the AI/ML field is a factor: this standard will provide a starting point, a foundation for further AI/ML security standards and updates as the field matures.