Artificial intelligence, new rules to limit risks

A study by Bcg highlights a widespread need for rules on the responsible use of AI in companies. Many, however, are not ready to comply with them.

Posted on 03 May 2023 by Valentina Bernocco

Generative artificial intelligence is still the talk of the moment, while Europe races to catch up with the AI Act and, in Hollywood, screenwriters protest, worried about losing their jobs to the "plagiarism machine": applications that could one day write film and TV scripts in their place. The recent "green light" given to OpenAI by the Italian Privacy Authority is a signal of reasonableness, an acknowledgment that technologies taking hold all over the world and in many markets cannot simply be censored.

Faced with the astonishing capabilities of ChatGPT, Midjourney, and other generative AI applications, the need is felt everywhere for clear rules that set boundaries and protect security, privacy, and intellectual property. And if even one of the fathers of deep learning, the computer scientist Geoffrey Hinton, fears future wars triggered by human decisions, denying the existence of any risk no longer seems possible. Meanwhile in Washington, Vice President Kamala Harris has summoned to the White House, to discuss artificial intelligence, the chief executives of Microsoft and Alphabet and of the two (former) startups in which those companies have invested, OpenAI and Anthropic respectively.

The need for rules on the use of AI also emerges from a study by Bcg, the "Digital Acceleration Index", conducted internationally among 2,700 company executives. Among the Europeans surveyed, 35% of executives said that new rules are needed to adopt AI in an ethical and responsible way, and in Italy the percentage rises to 49%. In addition, 34% of the Italian companies in the sample already have a role responsible for artificial intelligence (a chief AI officer).

Of course, the scenario is still immature and many companies do not yet know how to respond to the latest technological and regulatory developments. Only 28% of the total sample said their organization is ready to handle the new AI rules, but there are pockets of maturity. In Italy, 89% of managers in the energy sector say their company is ready, while in the financial sector the share is 87.5% and in IT it is 83%.

The European AI Act, currently making its way through the approval process, sets limits and obligations on artificial intelligence applications based on the potential risks, privacy breaches, and discrimination they could cause (mass biometric surveillance and credit assessment based on social scoring, for example, are prohibited). Companies that break the rules face financial penalties of up to 6% of annual revenues.

(rawpixel.com image on Freepik; starline opening image on Freepik)

According to Bcg analysts, a good way to address the problem is to implement a Responsible Artificial Intelligence (RAI) program within the company, based on principles of accountability, transparency, privacy, security, equity, and inclusion. These values must be upheld in the training of algorithms, in the development of applications, and in their use.

"The RAI initiatives can be the reference framework for both those who create artificial intelligence tools and for those who use them, helping both parties to deal positively with regulation", commented Enzo Barba, partner of Bcg X, a division of the AI consulting firm. "This aspect is particularly relevant for Italian companies that are operating in a regulatory context very attentive to the dynamics of development of new technologies and their impact on user privacy".

It must be said that the major IT and digital giants, such as Microsoft, Alphabet, and Accenture, have long since defined their own guidelines on responsible AI. The scenario, however, is changing rapidly, and so are the risks, given the latest developments in large language models and their rapid spread through free apps for end users and cloud services for developers.

Tags: corporate responsibility, artificial intelligence, OpenAI, ChatGPT, generative AI

