ETSI is pleased to announce the creation of a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI). The group will develop technical specifications to mitigate threats arising from the deployment of AI throughout multiple ICT-related industries.
This includes threats to artificial intelligence systems from both conventional sources and from other AIs. The ETSI Securing Artificial Intelligence group was created in anticipation that autonomous mechanical and computing entities may make decisions that act against their relying parties, whether by design or as a result of malicious intent.
Conventional cycle of network risk analysis
The conventional cycle of network risk analysis and countermeasure deployment, represented by the Identify-Protect-Detect-Respond cycle, needs to be reassessed when an autonomous machine is involved.
The intent of the ISG SAI is therefore to address three aspects of artificial intelligence in the standards domain:
- Securing AI from attack, e.g. where AI is a component in a system that needs defending
- Mitigating against AI, e.g. where AI is the ‘problem’, or is used to improve and enhance other, more conventional attack vectors
- Using AI to enhance security measures against attack, e.g. where AI is part of the ‘solution’, or is used to improve and enhance more conventional countermeasures.
Developing technical knowledge
The purpose of the ETSI ISG SAI is to develop the technical knowledge that acts as a baseline in ensuring that artificial intelligence is secure. Stakeholders impacted by the activity of ETSI’s group include end users, manufacturers, operators and governments. Three main activities will be undertaken and confirmed during the first meeting of the group.
Currently, there is no common understanding of what constitutes an attack on AI and how it might be created, hosted and propagated. The work to be undertaken here will seek to define what would be considered an AI threat and how it might differ from threats to traditional systems. Hence, the AI Threat Ontology specification seeks to align terminology across the different stakeholders and multiple industries.
Prioritising potential AI threats
ETSI specifications will define what is meant by these terms in the context of cyber and physical security and with a narrative that should be readily accessible to all. This threat ontology will address AI as system, attacker and defence.
This specification will be modelled on the ETSI GS NFV-SEC 001 ‘Security Problem Statement’, which was highly influential in guiding the scope of ETSI NFV and enabling ‘security by design’ for NFV infrastructures. It will define and prioritise potential AI threats along with recommended actions, and these recommendations will be used to define the scope and timescales for the follow-up work. Data is a critical component in the development of AI systems: both raw data and the information and feedback provided by other AI systems and humans in the loop.
Developing data sharing protocols
However, access to suitable data is often limited, forcing developers to resort to less suitable sources. Compromising the integrity of training data has been demonstrated to be a viable attack vector against an AI system.
This report will summarise the methods currently used to source data for training AI, review existing initiatives for developing data sharing protocols, and analyse the requirements for standards to ensure the integrity and confidentiality of the shared data, information and feedback.

The founding members of the new ETSI group include BT, Cadzow Communications, Huawei Technologies, NCSC and Telefónica. The first meeting of ISG SAI will be held in Sophia Antipolis on 23 October. Come and join us to shape the future path for secure artificial intelligence!