

ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats and failures in artificial intelligence systems [DRAFT]





The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and reacting/responding to situations that would previously have required human beings. Currently, however, the smarts are limited, so the systems don’t always react as they should.


Scope of the standard

The standard will guide organisations on addressing security threats and failures in artificial intelligence (AI) systems. It will:

  • Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
  • Explain how to detect and mitigate such threats.


Content of the standard

Sept update: the 2nd Working Draft (WD) outlines the following ‘threats to AI systems’:

  • Poisoning (attacks on the system and/or data integrity);
  • Evasion (deliberately misleading the AI algorithms with carefully crafted inputs);
  • Membership inference and model inversion (methods to distinguish [and potentially manipulate] the data points used in training the system);
  • Model stealing (theft of the valuable intellectual property in an AI system/model);
  • Model misuse (using the AI model for unintended purposes, e.g. through insecure APIs);
  • Sensor spoofing (? feeding false input data ?);
  • Scaling (??);
  • Adversarial (? other hacks ?).
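To make the first of these threats concrete, here is a minimal, hypothetical sketch of a data-poisoning attack (the scenario and all names are invented for illustration, not drawn from the draft standard): an attacker injects mislabelled points into the training set of a toy nearest-centroid classifier, dragging one class centroid across the decision boundary so that a previously correct query is misclassified.

```python
# Illustrative sketch only: a data-injection "poisoning" attack against a
# toy nearest-centroid classifier (all data and names are invented).

def train(data):
    """data: list of ((x, y), label) pairs -> dict of per-class centroids."""
    classes = {}
    for point, label in data:
        classes.setdefault(label, []).append(point)
    return {label: (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
            for label, pts in classes.items()}

def predict(model, point):
    """Return the label whose centroid is nearest to `point`."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

clean = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]

# The attacker injects mislabelled "B" points near class A, shifting the
# "B" centroid towards it -- an attack on training-data integrity.
poisoned = clean + [((0.5, 0.5), "B")] * 6

query = (1.3, 1.3)
print(predict(train(clean), query))     # -> "A"
print(predict(train(poisoned), query))  # -> "B": the boundary has shifted
```

Real poisoning attacks against machine-learning pipelines are of course far subtler, but the mechanism is the same: corrupt the training data, corrupt the model's behaviour.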



The project developing the standard started in 2022.

Sept update: the standard is at the 2nd Working Draft stage.


Personal notes

As usual with ‘cybersecurity’, the project proposal focused on active, deliberate, malicious, focused attacks on AI systems by motivated and capable adversaries, disregarding the possibility of accidental and natural threats, as well as threats from within (internal/insider threats). Even within that area of concern, I see no overt mention of ‘robot wars’, i.e. AI systems attacking other AI systems. Scary stuff, if decades of science fiction are anything to go by.

Detecting ‘threats’ (which I think means impending or in-progress attacks) is seen as an important area for the standard, implying that security controls cannot respond to undetected attacks ... which may be true for active responses but not for passive, general-purpose controls.

I’m curious about the imprecise use of terminology, too. Are ‘security failures’ vulnerabilities, control failures, events, or perhaps incidents? Are ‘threats’ information risks, threat agents, incidents, or something else?

However, the rapid pace of change in this field is acknowledged, the implication being that this standard will provide a basic starting point: a foundation for other standards and updates as the field matures.



Copyright © 2022 IsecT Ltd.