
"Democratic Commons": Ethical AI for citizen debate on shared democratic values

Interview with François Yvon

With the European elections on everyone's minds, François Yvon, CNRS research director at the Institute for Intelligent Systems and Robotics (ISIR), talks to us about the "Communs Démocratiques" (Democratic Commons) project on our shared democratic values. Spearheaded by the Make.org platform, Sciences Po, Sorbonne University and the CNRS, this innovative project, announced at the VivaTech trade show, aims to harness the potential of generative AI to strengthen democratic processes in a world facing an unprecedented crisis of institutional trust and growing informational warfare.

Tell us about the "Communs Démocratiques" project, its objectives and challenges.

François Yvon: The "Communs Démocratiques" project was born out of the need to strengthen democracy in the digital age, particularly in the face of the challenges posed by online information and disinformation. It is a multidisciplinary research initiative aimed at developing open-source generative AI solutions in the context of online participatory debates.

Sciences Po, the CNRS and Sorbonne University chose the Make.org hosting platform because they wanted to examine the various uses of AI in interventions that must be carried out semi-automatically, given the scale of contributions and participants: translating contributions in multilingual debates, producing summaries, and moderating to detect inappropriate content.
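To make the moderation use case concrete, here is a minimal sketch of what a semi-automatic moderation step can look like, using an off-the-shelf toxicity classifier from the Hugging Face Hub. The model name and review threshold are illustrative assumptions, not choices made by the project, and the final decision is left to a human moderator.

```python
# Minimal semi-automatic moderation sketch: flag contributions that an
# off-the-shelf toxicity classifier scores above a threshold, leaving the
# final decision to a human moderator.
# The model and threshold are illustrative assumptions, not project choices.
from transformers import pipeline

moderation = pipeline("text-classification", model="unitary/toxic-bert")

def flag_for_review(contributions, threshold=0.8):
    """Return (contribution, score) pairs a human moderator should review."""
    flagged = []
    for text in contributions:
        result = moderation(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        if result["label"] == "toxic" and result["score"] >= threshold:
            flagged.append((text, result["score"]))
    return flagged
```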

What are the issues involved in using AI in these debates?

F. Y.: The question is whether AI tools can bias participation, for example by incorrectly translating or summarizing contributions. It is crucial to define what is expected of these systems, and to identify, measure and correct biases. Should they reproduce all opinions equally, or according to their importance? What moderation rules should they follow? Should AI report only the meaning of a proposition, or also the emotion and engagement of the user?
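One way such a distortion could be measured, sketched below with assumed off-the-shelf models (neither is prescribed by the project), is to check whether a summary preserves the sentiment of the original contribution, a rough proxy for the emotion and engagement a participant expressed.

```python
# Sketch of one possible bias check: does a summary preserve the sentiment
# (a rough proxy for emotion and engagement) of the original contribution?
# Both model choices are illustrative assumptions, not project choices.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
sentiment = pipeline("sentiment-analysis")  # default English sentiment model

def sentiment_shift(contribution: str) -> float:
    """Signed sentiment difference between a contribution and its summary."""
    summary = summarizer(contribution, max_length=60, min_length=10)[0]["summary_text"]

    def signed(text):
        r = sentiment(text)[0]  # {'label': 'POSITIVE'|'NEGATIVE', 'score': ...}
        return r["score"] if r["label"] == "POSITIVE" else -r["score"]

    # A large absolute value suggests the summary dropped or inverted
    # the emotional content of the original contribution.
    return signed(summary) - signed(contribution)
```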

This is not a new topic. Debates and interventions on social networks are regulated by algorithms. And for the past 20 years, research has been trying to find the best way of using automatic processing while ensuring that it does not undermine democracy.

Why is it important to take a multidisciplinary approach to assessing and countering the potential biases of language models?

F. Y.: The "Communs Démocratiques" (Shared Democracy) program will bring together more than 50 researchers and engineers. This unprecedented multidisciplinary approach combines advanced expertise in data science with in-depth knowledge of the humanities and social sciences. Sciences Po's Cevipof and Medialab laboratories will be helping to define what would be fair and acceptable AI behavior as a moderator or reporter of debates, while at Sorbonne University we'll be focusing on understanding and improving the algorithms used by major language models. We will be working on 3 main uses for the algorithms: moderating and summarizing debates; helping users to formulate and express their opinions appropriately; and multilingual translation.

The program is supported by the Make.org platform and backed by five leading partners in ethical AI: Hugging Face, the Aspen Institute, Mozilla.ai, the Project Liberty Institute and GENCI. These industrial partners will enable us to test our models in real environments and conduct large-scale social science experiments, with participants recruited on the platform who are ready to take part in tests. This collaboration is essential if we are to compare the effectiveness of the models and ensure that they meet users' needs.

Why develop open source solutions?

F. Y.: Supported by France 2030's Communs Numériques program and Bpifrance, the project is funded for two years and raises the question of what we place at the service of the community. Three PhD theses will be funded on the subjects of moderation, assistance in expressing opinions, and multilingual translation. Beyond the models that will be released as open source, it is above all the underlying principles that need to be shared, disseminated and recognized in order to guarantee ethical and transparent use of the technologies developed.

In what other contexts can the results of this project be reused?

F. Y.: The results of this project can be applied in many other settings. For example, they could be used to facilitate interactions between citizens and their political representatives, such as in public consultations or debates on draft legislation. The technologies developed can also be adapted for other citizen participation platforms, enabling web users to ask questions not only of individuals but also of legal or legislative bodies. What's more, the principles and models developed can serve as a basis for other initiatives aimed at regulating AI in various social and political contexts, thus guaranteeing ethical and transparent use.