ISO/IEC 27090



ISO/IEC AWI 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems [DRAFT]


Abstract

“This document provides guidance for organizations to address security threats and failures specific to artificial intelligence (AI) systems. The guidance in this document aims to provide information to organizations to help them better understand the consequences of security threats specific to AI systems, throughout their lifecycle, and descriptions of how to detect and mitigate such threats.”
[Source: notes on a working draft - likely to change]


The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and reacting/responding to situations that would previously have required human beings. Currently, however, the smarts are limited, so the systems don’t always react as they should.


Scope of the standard

The standard will guide organisations on addressing security threats to Artificial Intelligence systems. It will:

  • Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
  • Explain how to detect and mitigate such threats.


Content of the standard

The 3rd Working Draft outlines the following threats:

  • Poisoning - attacks on the system and/or data integrity e.g. feeding false information to mislead and hence harm a competitor’s AI system;
  • Evasion - deliberately misleading AI algorithms with carefully-crafted inputs presented to the trained model at run time (adversarial examples);
  • Membership inference and model inversion - methods to distinguish [and potentially manipulate] the data points used in training the system;
  • Model stealing - theft of the valuable intellectual property in a trained AI system/model;
  • Model misuse - using the AI model for unintended purposes e.g. exploiting insecure APIs;
  • Supply chain attack - e.g. backdoored AI/ML functions;
  • Neural net reprogramming - hacking neural-network-based ML models for nefarious purposes;
  • Model misuse - exploiting additional information produced by an AI system, peripheral to its intended output;
  • Sensor spoofing - faking physical inputs (such as a robot car’s speed sensor readings) to manipulate a sensor-based AI system, for instance into exceeding speed limits;
  • And maybe others.
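To make the first of those threats concrete, here is a deliberately simple sketch (my own illustration, not drawn from the draft standard): flipping a handful of training labels shifts a classifier’s learned decision boundary, so an input that was classified correctly before the poisoning is misclassified afterwards.

```python
# Hypothetical toy example (not from the draft standard): a 1-D
# nearest-centroid classifier showing how relabelling a training
# sample ("data poisoning") shifts the decision boundary so that a
# previously well-classified input is misclassified.

def train(samples):
    """Compute the mean feature value for each class label."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for value, label in samples:
        sums[label] += value
        counts[label] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(means, value):
    """Assign the class whose training mean is closest to the input."""
    return min((0, 1), key=lambda c: abs(value - means[c]))

# Clean training set: class 0 clusters near 1, class 1 near 9.
clean = [(0, 0), (1, 0), (2, 0), (8, 1), (9, 1), (10, 1)]
# Poisoned set: an attacker relabels one class-1 sample as class 0.
poisoned = [(0, 0), (1, 0), (2, 0), (8, 0), (9, 1), (10, 1)]

clean_means = train(clean)        # {0: 1.0, 1: 9.0}
poisoned_means = train(poisoned)  # {0: 2.75, 1: 9.5}

print(predict(clean_means, 5.5))     # 1 - correctly nearer class 1
print(predict(poisoned_means, 5.5))  # 0 - same input, now misclassified
```

Real attacks target far more complex models, of course, but the mechanism is the same: corrupt the training data and the model learns a subtly wrong decision function.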



The project started in 2022.

Noted March 2023: it is at 3rd Working Draft stage.


Personal notes

Imprecise/unclear use of terminology in the drafts will be disappointing if it persists in the published standard. Are ‘security failures’ vulnerabilities, control failures, events, or perhaps incidents? Are ‘threats’ information risks, threat agents, incidents, or something else?

Detecting ‘threats’ (which I take to mean impending or in-progress attacks) is seen as an important area for the standard, implying that security controls cannot respond to undetected attacks ... which may be true for active responses but not for passive, general-purpose controls.

As usual with ‘cybersecurity’, the proposal focuses on active, deliberate, malicious, targeted attacks on AI systems by motivated and capable adversaries, disregarding accidental and natural threats, and threats from within (insider threats).

Noted March 2023: the standard addresses ‘threats’ (risks) to AI that are of concern to the AI system’s owner, rather than threats involving AI that are of concern to its users or to third parties, e.g. hackers and spammers misusing ChatGPT to learn new techniques. Publicly-accessible systems based on GPT-3 etc. put a rather different spin on this area.

Even within the stated scope, I see no mention of ‘robot wars’ where AI systems are used to attack other AI systems. Scary stuff, if decades of science fiction are anything to go by.

Noted March 2023: the value of AI/ML systems in identifying, evaluating and responding to information risks and security incidents is evidently out of scope of this standard: the whole thing is quite pessimistic, focusing on the negatives.

However, the hectic pace of progress in the AI/ML field is a factor: this standard will provide a starting point, a foundation for further AI/ML security standards and updates as the field matures.



Copyright © 2023 IsecT Ltd.