Artificial intelligence, the Bologna-Oxford axis: the first "manual" against ethical risks

The new auditing methodology, called Capai, is unique in the world: it aims to help companies comply with European legislation, reducing the risk that AI systems cause harm to individuals, society or the environment.

24 Mar 2022
Veronica Balocco

From the joint work of scholars at the University of Bologna and the University of Oxford comes a sort of instruction manual (DOWNLOAD THE FULL REPORT HERE) to warn companies about the ethical risks associated with the use of artificial intelligence. The 'manual', produced by the two universities, aims to "protect people, society, and the environment from the risks of AI, offering organizations a new approach to developing and managing these technologies in line with future European regulations". The new auditing methodology is "unique in the world" and was conceived as a direct response to the regulation on artificial intelligence proposed last year by the EU, a document that aims to coordinate the European approach to the human and ethical implications of AI.

Objective: to prevent or reduce risks

Developed by a group of experts from the Center for Digital Ethics (Cede), based in the Department of Legal Sciences of the University of Bologna, together with the Saïd Business School and the Oxford Internet Institute of the University of Oxford, 'Capai', an acronym for 'Conformity assessment procedure for AI', will help companies "adapt to future European legislation", explains Alma Mater, "by preventing, or at least reducing, the risk that artificial intelligence systems behave unethically and cause damage to individuals, communities, society more generally and the environment". The procedure will be useful, for example, for evaluating artificial intelligence systems "in order to prevent privacy violations and the incorrect use of data". In addition, the new tool "will offer support for explaining the results obtained from AI systems and for the development and activation of reliable systems compliant" with European requirements. Companies will thus be able to evaluate their artificial intelligence systems, and to show customers how those systems are managed and used.
This evaluation sheet takes into account the objectives of the AI system, the organizational values that support it and the data that were used to activate it. It also includes information about the system's manager, along with their contact details, should customers wish to get in touch with concerns or questions.

A "potentially dangerous" technology

"Artificial intelligence, in its many forms, is designed as a tool for the benefit of humanity and the environment. It is an extremely powerful technology, but it can also become dangerous", explains Luciano Floridi, director of the Center for Digital Ethics (Cede) of the Alma Mater in Bologna and lecturer at Oxford. "For this reason we have developed an audit methodology capable of verifying that AI systems are in line with European legislation and respect ethical principles. In this way we can help ensure the development and correct use of these technologies."

To develop the methodology, adds Matthias Holweg of the Saïd Business School, co-author of the project, "we first of all created the most complete database of errors produced by AI systems. Starting from this database, which records the most common problems and describes best practices in detail, we then created a toolbox, unique in the world, to help organizations develop and manage artificial intelligence systems that are compliant with the law, technically sound and respectful of ethical principles. Our hope is that 'Capai' can become a standard process for all AI systems and thus prevent the many ethical problems that have arisen to date."