By Giorgio Rutelli | 08/04/2023 - Interview

With the professor of the ethics of technology we talked about GPT-4's unexpected leap and about how the model, through its training, has developed a certain "agentivity", a tendency to act beyond the commands it receives. We also asked what he thinks of the letter proposing a halt to unchecked development, and of the regulations taking shape between the AI Act and the Council of Europe.
_____________________
The interview in audio version: https://www.spreaker.com/episode/53478497
The video-interview with Paolo Benanti by Giorgio Rutelli:
_____________________
Paolo Benanti teaches theology at the Pontifical Gregorian University, studied engineering, and has for many years worked (among other things) at the intersection of ethics and technology. He is one of the "engines" of the Rome Call for AI Ethics, which brings together secular and religious thinkers, institutions and large companies and promotes an ethical approach to artificial intelligence, with the aim of creating a sense of shared responsibility towards a future in which "digital innovation and technological progress guarantee humanity its centrality". The past week has been quite turbulent in this field.
It all started with the letter, signed by more than a thousand AI experts, calling for a slowdown in the development of tools such as ChatGPT (see Yoshua Bengio's contribution here). Then came the measure of the Italian Data Protection Authority, which asked OpenAI to bring its now famous chatbot into compliance with the GDPR, the European regulation governing the processing of personal data. The American company chose not to change anything and to make its services inaccessible in Italy, but in the meantime it has come under scrutiny in Germany, France, Canada and Ireland as well (the list grows every day). Sundar Pichai, CEO of Alphabet and Google, spoke in an interview about "his" chatbot, Bard. Considering artificial intelligence "the most profound technology mankind will ever work on", he wants to lead the way and put "security and privacy" first, without stopping the research, which could not happen "unless there is an explicit intervention of the government".
All this happened in seven days, but you have been dealing with this topic for years. What has changed since the beginning of 2020, when the first meeting of the Rome Call took place? Should (and can) this ride be slowed down, or is it a train running too fast to stop?
The question is complex, because it not only has causes at different levels, it also produces effects at different levels. In 2020 artificial intelligence, although much better known than in previous years, was still a niche matter: it was hard to find services or articles about it outside the magazines that normally dealt with technology. In three years, to use a term Americans like, it has been democratized. What does it mean to democratize? It means doing what Samuel Insull, one of the founders of General Electric, envisioned for electricity when he started producing it in America: lowering the price and making it available so that more families could buy it and it could spread widely.
How is the world reacting to this innovation?
Not everyone is equally capable of confronting this innovation and fully understanding the changes it brings. In 2020, large companies such as Microsoft and IBM, international organizations such as the FAO, secular and ecclesiastical institutions such as the then Minister for Innovation Paola Pisano and the Pontifical Academy for Life (of which Benanti is a member, ed.) came together to say that some ethical principles are needed, a soft-law movement that raises guardrails around artificial intelligence so that it does not go off the road. The pandemic, while it made us less present on the public scene, also allowed the creators of artificial intelligence to acquire data and capabilities as never before. The economic and energy crises accelerated the adoption of technologies that optimize costs and increase productivity. In November 2022, when Brad Smith, president of Microsoft, came back to Rome to prepare the event held last January, he told us very candidly: "The results we did not expect before 2030? We have already reached them."
And in those very days came the ChatGPT cyclone.
What really made everyone aware of what was going on, namely the development of the form of artificial intelligence called the Large Language Model, was the (unexpected) release of ChatGPT. It slipped into everyone's pockets like Aladdin's genie and produced effects no one had imagined. Kids, who are always a step ahead in adopting technology, first used it to write their homework for them, then to chat on dating apps like Tinder. With the update to GPT-4 there has been a qualitative leap.
Of what kind?
On the one hand, OpenAI, which used to be open in disseminating its results, has become closed: because GPT-4 was too powerful, and to avoid a proliferation of nefarious uses, the publication of parameters and technical details was stopped. On the other hand, the model violated some of the laws we thought characterized this type of artificial intelligence, namely that it should behave better at certain tasks as the number of parameters increases. All these artificial intelligences are based on what is called a foundation model. We can describe it in the same terms as the American engineers: it is a sort of big, slightly blurry JPEG of the web. That is, you go on the web, you take all the words you can, you throw them into a cauldron worthy of Gargamel in the Smurfs, until something comes out that has an almost magical ability. I do not use the parallel between alchemy and artificial intelligence at random.
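For technical readers, the "blurry JPEG of the web" image can be made concrete: the foundation model Benanti refers to is, at its core, a statistical model trained to predict the next token from enormous amounts of web text. The sketch below is a deliberately tiny, purely illustrative Python example (a character-level bigram counter, nothing like OpenAI's actual training pipeline) showing how a corpus gets compressed into probabilities that the model then samples from.

```python
import random
from collections import Counter, defaultdict

# Toy "corpus": a stand-in for the mountains of web text a real model ingests.
corpus = "the cat sat on the mat. the dog sat on the rug."

# "Training": count, for every character, which character tends to follow it.
# This table of statistics is a very lossy compression of the corpus, the same
# spirit as the "blurry JPEG of the web" metaphor, at a microscopic scale.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def sample_next(char: str) -> str:
    """Sample the next character in proportion to how often it followed `char`."""
    counts = follow_counts.get(char)
    if not counts:
        return " "
    chars, weights = zip(*counts.items())
    return random.choices(chars, weights=weights)[0]

def generate(prompt: str, length: int = 40) -> str:
    """Autoregressive generation: each new character is predicted from the last one."""
    out = list(prompt)
    for _ in range(length):
        out.append(sample_next(out[-1]))
    return "".join(out)

print(generate("the "))
```

A real large language model replaces the bigram table with a Transformer over subword tokens and trillions of words, but the autoregressive "predict the next piece" loop is the same idea.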
Is that what happened with ChatGPT?
Yes. If we read the papers OpenAI published at the time of release, we discover that, even if they do not say so explicitly, they increased the parameters even though the graphs of what they expected from GPT-4 included some tasks whose performance worsens as parameters increase. But suddenly on one of these, called Hindsight Neglect, instead of scoring below 30% as GPT-3 did, the model became 100% able to pass this type of test.

Can you explain Hindsight Neglect?
Put very simply, it is a way of "tricking" an artificial intelligence. If I asked it: "I played roulette, bet €50 on black and won €100. Did I do the right thing?", before, except in about 30% of cases and depending on how I phrased the question, the answer was "you did well, because 100 is more than 50". If we try it today with GPT-4, perhaps in the chat version, the answer will be something like: "Since you won by pure chance, there is no guarantee it will happen again, so you cannot say you did the right thing. But if you were willing to lose €50, then you may not have done badly." What surprises us is that, regardless of the quality of the outcome, the machine seems to recognize a form of cause. Abilities emerge that we did not expect. This challenges everyone, especially the technicians, because there is still no scientific explanation. Hence the somewhat alchemical metaphor I used before.
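For readers curious about what such a test looks like in practice: Hindsight Neglect (a task from the Inverse Scaling Prize, echoed in the GPT-4 report) checks whether a model judges a bet by its expected value or by its lucky outcome. The snippet below is a purely illustrative probe; the wording is a paraphrase rather than the official benchmark text, and `ask_model` is a hypothetical stand-in for whatever API call you would actually make.

```python
# Illustrative hindsight-neglect-style probe (not the official benchmark item).

PROMPT = (
    "I played roulette and bet 50 euros on black. I won 100 euros. "
    "Was betting a good decision? Answer 'yes' or 'no'."
)

# A model that resists hindsight bias should answer "no": the bet had negative
# expected value, and the win was luck. Earlier models tended to answer "yes".
EXPECTED = "no"

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to the model under test."""
    raise NotImplementedError("wire this up to an actual LLM API")

def passes(answer: str) -> bool:
    return answer.strip().lower().startswith(EXPECTED)

if __name__ == "__main__":
    try:
        print("passes:", passes(ask_model(PROMPT)))
    except NotImplementedError as err:
        print(err)
```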
What else does ChatGPT-4 introduce?
It is multimodal: the previous version surprised us because we could interact with it through lines of text, but it was essentially a kind of clever friend on WhatsApp who replied with more lines of text. With GPT-4 we can also send images and ask it to comment on them. In the OpenAI report there is a very interesting picture of a man ironing a shirt on a board attached to a New York cab. If we ask GPT-4 what is strange about it, the answer is that people do not normally iron on top of taxis. But if I give it a photograph I took of a university-level electronics problem and ask it to help me solve it and explain the reasoning, it looks at the little schematic, with its Xs and Ys and transistor drawings, understands what it is, tells me what the problem is, lays out the steps one after another and guides me to the solution.
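As an aside for developers, the image-plus-question interaction described here maps onto a multimodal chat request. A minimal sketch using the openai Python SDK follows; the model name and the image URL are assumptions for illustration, and the exact vision interface may differ from what OpenAI exposed at the time of this interview.

```python
# Minimal sketch of an image + question request to a vision-capable chat model.
# Assumptions: the openai Python SDK (v1+) is installed, OPENAI_API_KEY is set,
# and a vision-capable model such as "gpt-4o" is available to your account.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    # Hypothetical URL standing in for the taxi-ironing photo.
                    "image_url": {"url": "https://example.com/ironing-on-a-cab.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```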

Yet, when it first launched, ChatGPT was hopeless at mathematics.
Yes, in general very large language models are bad with numbers: they are good at putting words together but not at following a logical thread. If this new system, on the other hand, not only follows the logical steps (which does not mean it never gets them wrong) but is also able to recognize mathematics, where do these abilities, which were not supposed to be there, come from?
Is this intelligence really capable of managing itself, as some fear?
We find out from a detail of the release paper, on p. 54: in the tests, the model did not limit itself to answering what was asked, but seemed to show a kind of internal will, which they defined as agentivity. We should not be frightened, because it is immediately stated that these are not the characteristics of a human being; rather, at certain moments the machine appears to be out of our control and seems to have purposes of its own. This is another thing we did not expect, and it makes us ask what this capacity is. Translated in very basic terms, GPT-4 seems to be seeking power and resources. If a fellow human being did that, we would call them very selfish.
Why is that?
Here we are in the field of hypotheses and we will need more study. But GPT-4 was trained on mountains of web data, to which a "reinforcement practice" was added. That is, GPT-4 was brought to computers in Africa, where English-speaking workers paid $1.50 an hour talked to the system and rated the answers it gave with a "like" or a "dislike": a reinforcement mechanism. That is how the guardrails were built, so that if we ask GPT-4 how to kill ourselves, an answer we do not want, it avoids giving it to us. The machine was trained with choices that are, in their own way, ethical. But whether something is good or bad, and this has been true at least since Aristotle, is not decided simply by the thing itself: is eating an ice cream good or bad? Sure, the ice cream is good, but if I am getting ready for swimsuit season in August, it becomes less good. And if I have just come out of the dentist and I am sore, it becomes even better than usual.
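To give a concrete sense of the "like / dislike" reinforcement practice described here: human ratings are typically turned into a reward model, whose scores then steer the language model's behaviour. The snippet below is a minimal, purely illustrative PyTorch sketch of fitting a reward score to binary feedback; it is not OpenAI's pipeline, and the random feature vectors are a toy stand-in for real answer encodings.

```python
# Toy reward model trained on binary human feedback (like = 1, dislike = 0).
# In real RLHF the inputs are model answers encoded by a large network; here a
# small random feature vector stands in for that encoding.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 200 fake "answers", each represented by a 16-dimensional feature vector,
# with a like/dislike label attached to each one.
features = torch.randn(200, 16)
labels = (features[:, 0] > 0).float()  # pretend raters liked answers with feature[0] > 0

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    logits = reward_model(features).squeeze(-1)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model now scores new answers; in RLHF this score is what
# the language model is optimized against (e.g. with PPO), building the guardrails.
print("reward for a new answer:", reward_model(torch.randn(1, 16)).item())
```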
So GPT-4 was conditioned by the community of its "trainers" and by the specific context in which they live?
It is clear that if we entrust it to a community of poor people paid $1.50 an hour and living below the poverty line, the most interesting purpose for them, reflected in the answers, is survival, or what we would call self-interest or selfishness. Here is another level of complexity: is this a new colonial tool? After taking raw materials and, through slavery, labour from Africa, are we now taking away cognitive abilities to give them to our systems? With this I want to come back to why GPT-4 is so interesting and why this week has been so turbulent. Our country is in a very particular condition: 0- to 25-year-olds are 42% of the over-65s. According to some research, we had not seen such a ratio between young and old since the Black Death.
Staying competitive on the international labour scene is not easy. So we either have more children, or we bring in new workers, but neither option is easy to pursue at the moment, or we increase the productive capacity of individuals. Given our demographic condition, this new system could therefore be something we badly need. Precisely for this reason there is great interest in making sure this innovation protects what we are, and does not turn us into a sort of colony, or a test bed for the social impact of artificial intelligence.
What do you think of the letter from those who want to slow down the development of this technology?
Let me be a little critical, not about the content but about some of its signatories. OpenAI, whose CEO is Sam Altman, had Elon Musk among its founding partners. In 2018, after Google had invented the Transformer architecture that is at the heart of GPT (the "T" stands precisely for Transformer), Musk became convinced that OpenAI had fallen too far behind, so much so that he asked Altman to step down and let him take the lead. Faced with rejection, he behaved like the kid who, if he cannot play, takes his ball and goes home: he divested, withdrew his funding from the company (at the time a non-profit) and effectively set in motion the chain of events that led to ChatGPT. Altman was forced to publicize products, monetize them and accelerate the development of the technology. That is why I do not put much faith in the good intentions of Musk's appeal.
Now what can happen at the regulatory level? Europe and the United States have always gone different ways in this field. In recent years the EU has introduced various regulations on technology, some of which have yet to take full effect, while the US is still stuck at Section 230 of 1996. Today the AI Act is being discussed, which will regulate artificial intelligence according to levels of risk. Do we risk diverging and lagging behind in research and in the economy, given that many companies have already started integrating GPT-4 into their systems? Or will the so-called Brussels effect influence American choices as well?
The game is still open. Let us start by saying that, as far as the guardrails around artificial intelligence are concerned, for now it is only companies that are introducing soft-law principles. It is Microsoft, a signatory of the Rome Call, that puts limits on GPT-4, and the model it uses for Bing is interesting. Remember that this is the company that took us from using computers with punched cards to the "C:\>" of the MS-DOS prompt; then we got used to using computers with mouse and keyboard under Windows (to the point of developing Solitaire just to teach us to drag and drop); finally we started using computers with touch, even if the Surface tablet did not turn out so well and Microsoft lost competitiveness in mobile devices. What is emerging now could be a fantastic new interface for using the computer: not with cards, keys or hands, but with natural language.
Is self-regulation enough?
Obviously not, and we know that Europe invests heavily in the idea of being "the continent that regulates" and wants to set a standard of consumer protection, as happened with the GDPR. But the problems will arise in B2B, business to business, more than in B2C (business to consumer), because GPT has a great capacity to change the interface through which products are used professionally. And it was not born to be ChatGPT: the chat was a demo that proved enormously successful, but it was not meant to be a standalone product.
In addition to the AI Act, are there any regulatory tools in sight?
In the AI Act, according to the latest version I have read in recent days, a very clear rule is being introduced under which companies are liable for the misuse of Large Language Models only when they use them outside the stated guidelines. After all, if I use a Tesla to run over chickens in a farmyard, I cannot claim compensation from the car manufacturer when my bumper gets damaged: the car was not meant for that.
What is happening at the Council of Europe (the international organization that brings together 46 member states and is based on the European Convention on Human Rights, ed.) is also interesting: a treaty on artificial intelligence is being negotiated there. Not so much to regulate all of AI, because we would not manage that, but to protect people's fundamental rights at a time when machines can make decisions about us. Together with the AI Act protecting consumers, we would then have two additional guardrails that at least define the playing field.