
Standards for and by AI

Published on: 27 February 2025 at 7:33 pm
Last updated: 16 July 2025 at 11:50 am
By Yves Charmont

What role can public institutions play in defining standards for AI? Interviewed during a break at the recent study days of the OECD's public communication network, Andréa Baronchelli, a professor at City University London and a specialist on the subject, reminded listeners of the role of the law and of empirical practices in constructing a normative framework. However, he also underscored how important it is for organisations to adopt standards suited to the issues of transparency, responsiveness and customisation, which ensure an ethical and effective use of AI.


During the study days organised by the OECD’s public communication network and attended by Cap’Com in Paris on 25 and 26 February 2025, the theme of multilateral dialogue, building resilience and stepping up cooperation was addressed in a lecture focused on artificial intelligence. It was an opportunity for Point commun to hear Andréa Baronchelli, a professor at City University in London, who is one of the specialists on the subject.

Point commun: How can we establish practices and even standards in the relationships between a profession and AI?

Andréa Baronchelli: Firstly, there is a framework laid down by law. Institutions such as governments can impose or attempt to regulate practices, and encourage the population to adopt a particular behaviour. Secondly, there is a set of common practices developed in the field, stemming from actual expectations and needs. From there, a standard is gradually produced by people as they endeavour to coordinate their actions. They may not set out to codify a shared behaviour, but it codifies itself. That can be seen everywhere: just imagine all the unwritten norms that exist. For example, we speak English in a multilingual meeting even though no one has told us we must speak English; it is simply standard practice. Many professions generate standards in their field, and AI follows the same path, compartmentalising information by discipline. Think of a university: you can define a set of rules for using AI within an organisation, and therefore within the university. While you are at that university, you will follow that particular rule, but if you go to work at another university, there will be a different set of rules. Where communication is concerned, then, there is a lot of leeway for the standard practice in an organisation to be adopting AI in such and such a manner, and in so doing to codify a standard for the people who will be using it. The challenge will be to find common ground, i.e. a code that is widely accepted and understood by citizens.

Point commun: Is this empirical approach already sufficiently established for it to become a normative bedrock?

Andréa Baronchelli: To a large extent, yes. I’ll give you three examples.

First, a standard that clearly comes from outside and that institutions (local authorities in particular) can no longer ignore: round-the-clock responsiveness. Let me explain. This is a phenomenon that the Financial Times has aptly described in relation to Gen Z and cryptocurrencies. Young people who deal in cryptocurrencies do not understand that the markets are sometimes closed, i.e. not operational around the clock.


And since the advent of chatbots, it is exactly the same for dialogue with the public service. Chatbots are accessible all the time. Today, when people want to put questions to their public institution and need an answer, they expect it to respond, even in the middle of the night. The UK government is considering using chatbots with this in mind, as an always-on service, and the same applies in the Estonian administration. AI can genuinely deliver added value by answering simple, basic questions at any time of the day or night. As a result, AI is rendering the notion of "office hours" increasingly obsolete. So there we have an example of how the norms are changing.

Another example of norms is targeted communication. In a world where we are constantly inundated with spam emails, we’d like a personalised form of communication. I myself live in a region and I’d like the public information that reaches me to be directly relevant to me. And that is a norm to which public institutions must adapt. Here too, AI can help them more effectively segment the public, for example, and tailor messages to specific categories to improve access to services and diversify the public concerned.
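This kind of segmentation can be sketched very simply. The following is a minimal, hypothetical illustration, not a real system: the field names (age, has_children), the segments and the message variants are all invented for the example; a real deployment would build segments from actual service data.

```python
# Minimal sketch: rule-based segmentation of residents so that each
# group receives a message variant tailored to its situation.
# All field names, segments and messages are hypothetical.

def segment(resident: dict) -> str:
    """Assign a resident to a communication segment."""
    if resident.get("has_children"):
        return "families"
    if resident.get("age", 0) >= 65:
        return "seniors"
    return "general"

MESSAGES = {
    "families": "New childcare places are opening in your district.",
    "seniors": "Home-help services: how to apply.",
    "general": "Your district's services, all in one place.",
}

def tailor(residents: list[dict]) -> list[tuple[str, str]]:
    """Pair each resident with the message variant for their segment."""
    return [(r["name"], MESSAGES[segment(r)]) for r in residents]
```

In practice the rules would be learned or refined with AI rather than hand-written, but the principle is the same: one audience, several segments, one message per segment.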

A final example: interaction as a norm. During our conference at the OECD this morning, there was a discussion of this interaction between AI and communication for participation and consultation purposes, i.e. its ability to take into account not only top-down communication but also bottom-up citizen input. For example, I live in London, and a number of local surveys were launched on the subject of building new skyscrapers. No tool for gauging public opinion was identified or used. And yet, had they taken the time to measure the opinions voiced spontaneously on social media, the government would no doubt have understood that the majority of the population was saying: "No, we don't want any new skyscrapers." However, there was no effective means of assessment in place, even though AI can easily sift through masses of e-mails and posts to reveal genuine shifts in public opinion. Shortly afterwards, construction work began, much to residents' dismay. This real failure to listen could easily have been avoided.
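The idea of sifting posts to surface an opinion shift can be illustrated with a toy aggregator. This is only a sketch under strong assumptions: real systems would use a language model for stance detection, whereas here a hypothetical keyword rule stands in, and the keyword lists are invented for the example.

```python
# Toy sketch: tallying spontaneous stances in a batch of posts.
# The keyword lists are hypothetical stand-ins for a real
# stance-detection model.
from collections import Counter

NEGATIVE = {"no", "against", "oppose"}
POSITIVE = {"yes", "support", "favour"}

def stance(post: str) -> str:
    """Classify one post as for, against or neutral (keyword rule)."""
    words = set(post.lower().replace(",", " ").replace(".", " ").split())
    if words & NEGATIVE:
        return "against"
    if words & POSITIVE:
        return "for"
    return "neutral"

def poll(posts: list[str]) -> Counter:
    """Aggregate stances across a batch of posts."""
    return Counter(stance(p) for p in posts)
```

Run over a stream of e-mails or social posts, the aggregate counts are what would have revealed the "No, we don't want any new skyscrapers" majority described above.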

Point commun: Even so, as a public body, should we be looking into the norms of AI?

Andréa Baronchelli: Institutions must become aware of these new standards, otherwise they will increasingly be seen as inefficient and absent.

Point commun: What can institutions themselves generate in terms of norms?

Andréa Baronchelli: They need to make AI a team member, not a robotised decision-maker. At this conference, South Korea talked to us about the problems stemming from hallucinations. This risk can be reduced if we see AI as a colleague and not “someone” in charge of decision making. So we can have an AI that is available around the clock to answer basic, straightforward questions, but we can also train AI to reply to more complex questions by saying: “No, you’ll have to wait until office opening hours on Monday.” Otherwise, you run the risk of losing your audience’s trust.
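The "colleague, not decision-maker" idea above can be sketched as a simple router: answer known, simple topics at any hour, and defer everything else to staff during office hours rather than hallucinate. The FAQ entries and wording below are hypothetical; a real chatbot would sit on top of a language model with this deferral logic as a guardrail.

```python
# Minimal sketch of an always-on assistant that knows its limits:
# simple FAQ topics are answered at any hour; anything else is
# deferred to human staff. FAQ content is hypothetical.

FAQ = {
    "opening hours": "The town hall is open Monday to Friday, 9am-5pm.",
    "waste collection": "Bins are collected every Tuesday morning.",
}

DEFERRAL = ("I can't answer that reliably. A member of staff will "
            "reply during office hours, Monday to Friday, 9am-5pm.")

def answer(question: str) -> str:
    """Answer from the FAQ if a known topic matches; otherwise defer."""
    q = question.lower()
    for topic, reply in FAQ.items():
        if topic in q:
            return reply
    return DEFERRAL
```

The design choice is the point: trust is preserved not by answering everything, but by making the machine say "wait for a human" when the question exceeds what it can answer reliably.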


Another norm will be transparency. Citizens must be told when a content item is AI-generated. And this is where public institutions can stand out from private institutions by being far more transparent.

Another necessity is to take the public's expectations into account. Public institutions should take the lead regarding the choices made about AI, how to use it and how to introduce it.

A multilateral dialogue on public communication, organised by the OECD

The Organisation for Economic Co-operation and Development conducted a two-day work session on the theme of "Strengthening resilience and cooperation" on 25 and 26 February 2025 in Paris. Cap'Com, the French local government communication network, was also involved in this work session. The initial observation was weighty: "Managing the ongoing challenges of a complex information ecosystem is a tough responsibility for government communications staff. Public and media discourse is becoming increasingly polarised, depriving societies of the necessary space for constructive debate, while at the same time sowing mistrust and division. The bold solutions required to tackle climate change, foster inclusive growth and manage a volatile geopolitical landscape can seem beyond our reach."

And yet, "the evidence on boosting resilience and trust increasingly points to good communication as a powerful lever for positive change, alongside other actions". The participants worked on ways to restore trust, at the initiative of the OECD's Innovative, Digital and Open Government Division (Indigo) and the Harvard T.H. Chan School of Public Health, with backing from the NATO Science for Peace and Security Programme.
