AIRO: An Ontology for Representing AI Risks Based on the Proposed EU AI Act and ISO Risk Management Standards

The growing number of incidents caused by the (mis)use of Artificial Intelligence (AI) is a matter of concern for governments, organisations, and the public. To control the harmful impacts of AI, efforts are under way around the world, ranging from guidelines promoting trustworthy development and use, to standards for managing risks, to regulatory frameworks. Among these efforts, the first-ever AI regulation proposed by the European Commission, known as the AI Act, is prominent because it takes a risk-oriented approach to regulating the development and use of AI systems. In this paper, we present the AI Risk Ontology (AIRO) for expressing information associated with AI risks, based on the requirements of the proposed AI Act and the ISO 31000 series of risk management standards. AIRO assists stakeholders in maintaining and documenting risk information, performing impact assessments, and demonstrating legal compliance. To show its usefulness, we model existing real-world use cases from the AIAAIC repository of AI-related risks and produce documentation for the EU's proposed AI Act.
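As a rough illustration of the kind of risk descriptions AIRO is meant to capture, the sketch below shows how a single use case could be encoded as RDF using Python and rdflib. This is not code from the paper: the namespace IRI and the term names used here (AISystem, Risk, hasRisk, hasConsequence) are assumptions for illustration only and should be checked against the published AIRO vocabulary.

```python
# Minimal sketch (not the authors' code) of expressing one AIAAIC-style
# use case with AIRO-like terms via rdflib. Class/property names and the
# namespace IRI are assumptions for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

AIRO = Namespace("https://w3id.org/airo#")      # assumed namespace IRI
EX = Namespace("https://example.org/usecase#")  # hypothetical use-case namespace

g = Graph()
g.bind("airo", AIRO)
g.bind("ex", EX)

# An AI system drawn from a hypothetical use case
g.add((EX.ProctoringSystem, RDF.type, AIRO.AISystem))
g.add((EX.ProctoringSystem, RDFS.label, Literal("Remote exam proctoring system")))

# A risk associated with the system, and one of its consequences
g.add((EX.BiasRisk, RDF.type, AIRO.Risk))
g.add((EX.BiasRisk, RDFS.label, Literal("Biased flagging of candidates")))
g.add((EX.ProctoringSystem, AIRO.hasRisk, EX.BiasRisk))

g.add((EX.Discrimination, RDF.type, AIRO.Consequence))
g.add((EX.BiasRisk, AIRO.hasConsequence, EX.Discrimination))

# Serialise as Turtle, e.g. as input to AI Act documentation
print(g.serialize(format="turtle"))
```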

Speakers: