The arrival of generative AI, accessible to the general public since the end of 2022, is a telling example. Generative AIs such as ChatGPT, Midjourney, or Sora can produce text, images, or videos from a textual instruction, better known as a prompt.
Because these tools can answer school homework questions, they are calling into question how knowledge is assessed and pushing teachers to integrate them into their teaching practices. While these AIs, initially designed without any educational aim, have genuinely disrupted the field of education, what about the AIs developed specifically to improve learning and teaching?
Objectives of AI in education
AIs designed specifically to serve students, teachers, or institutions vary in their levels of maturity and deployment. Aiming to support learning and combat academic failure, they are mainly intended to personalize already structured courses. Their promise is to adapt to each person's pace and individual needs: the difficulty must adjust progressively, neither so hard that students drop out nor so easy that they become bored.
These AIs are generally built from three fundamental components: a model of the domain being taught, a model of the learners, and a model of the pedagogical approach.
Together, these three components allow a system to present different content or exercises depending on its interactions with learners, while following a pedagogical approach defined in advance.
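To make this three-part architecture concrete, here is a minimal sketch in Python; the skill names, difficulty scale, and update rule are invented for illustration and do not reflect any real system:

```python
from dataclasses import dataclass, field

# Domain model: the skills to teach, with exercises tagged by difficulty.
DOMAIN = {
    "fractions": [("ex1", 1), ("ex2", 2), ("ex3", 3)],  # (exercise id, difficulty)
    "geometry":  [("ex4", 1), ("ex5", 2)],
}

@dataclass
class LearnerModel:
    """Learner model: an estimated level per skill, updated after each answer."""
    levels: dict = field(default_factory=lambda: {skill: 1 for skill in DOMAIN})

    def update(self, skill: str, correct: bool) -> None:
        # Simplistic update rule: raise the level on success, lower it on failure.
        self.levels[skill] = max(1, self.levels[skill] + (1 if correct else -1))

def next_exercise(learner: LearnerModel, skill: str):
    """Pedagogical model: offer the exercise closest to the learner's estimated level."""
    return min(DOMAIN[skill], key=lambda ex: abs(ex[1] - learner.levels[skill]))

learner = LearnerModel()
learner.update("fractions", correct=True)
print(next_exercise(learner, "fractions"))  # ('ex2', 2): difficulty has moved up
```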
Examples of AI designed for education
Research results in the field of education are starting to be deployed in systems used in France. For example, since 2019 the digital skills assessment and certification platform PIX has offered an adaptive algorithm, released as open source and resulting from research work. Its users can train on questions and exercises whose difficulty adjusts according to their answers, and then request certification of skills such as mastery of IT tools, online security, programming, and digital content creation.
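PIX's actual open-source algorithm is more elaborate than what can be shown here, but the general principle of adaptive difficulty can be sketched with an Elo-style update, a technique commonly used in adaptive learning systems. All values below are invented:

```python
import math

def expected_success(ability: float, difficulty: float) -> float:
    """Logistic model: probability of a correct answer, given ability and item difficulty."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability: float, difficulty: float, correct: bool, k: float = 0.4) -> float:
    """Elo-style update: move the ability estimate toward the observed result."""
    return ability + k * ((1.0 if correct else 0.0) - expected_success(ability, difficulty))

ability = 0.0  # neutral starting estimate
for difficulty, correct in [(0.0, True), (0.4, True), (0.8, False)]:
    ability = update_ability(ability, difficulty, correct)
    print(f"updated ability estimate: {ability:.2f}")
# The next question would then be chosen near the current ability estimate.
```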
Furthermore, many EdTech companies position themselves as using AI. Six of them were notably winners of the Artificial Intelligence Innovation Partnership (P2IA), led by the French Ministry of National Education and Youth, which aims to provide elementary school teachers (first to third grade) with AI-based educational assistants that help them personalize learning.
The P2IA particularly supports fundamental learning in French and mathematics, and the tools developed by these companies, including exercise recommendation systems and dashboards, are used nationally.
Other examples of AI for education are early dropout prediction systems. Useful for hybrid courses (that is, partly in class and partly online) and widely used in MOOCs (massive open online courses), these systems aim to trigger an intervention as early as possible to encourage the learner not to drop out. This can take the form of automatic feedback, such as personalized messages, or of an alert sent to teaching staff, who are then responsible for reaching out to the student in question. Such systems are more widespread in the United States.
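As an illustration of how such an early-warning system might work, here is a minimal sketch assuming the scikit-learn library; the engagement features, training data, and alert threshold are invented for the example and do not come from any deployed system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per learner: [logins last week, % videos watched, % quizzes done]
X_train = np.array([
    [5, 0.9, 0.8],
    [0, 0.1, 0.0],
    [3, 0.6, 0.5],
    [1, 0.2, 0.1],
])
y_train = np.array([0, 1, 0, 1])  # 1 = dropped out

model = LogisticRegression().fit(X_train, y_train)

# Flag learners whose estimated dropout risk exceeds a chosen threshold.
new_learner = np.array([[1, 0.3, 0.2]])
risk = model.predict_proba(new_learner)[0, 1]
if risk > 0.5:
    print(f"risk {risk:.2f}: send an encouragement message or alert teaching staff")
```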
Current challenges and regulation of AI
Apart from these examples, AI for education is still struggling to establish itself in everyday use. “The fact that a digital tool exists and is potentially effective, or even that classrooms are equipped with this tool, is not enough for teachers and students to use it,” says André Tricot, professor of cognitive psychology at Paul Valéry Montpellier 3 University.
For teachers and students to trust these technologies, and thus adopt them, it is essential that the functioning of the algorithms be understandable or co-constructed, and that the systems be transparent. It is also necessary to understand how these tools influence the learning experience and knowledge acquisition.
AI in education can also run into difficulties collecting and integrating the data it needs to operate. In a recommendation system, for example, several interactions are required before recommendations reflect the student's actual level. Until then, the system must be initialized with an assumed level; if that assumption is wrong, the student risks a degraded experience (exercises too difficult, with a risk of dropping out, or too easy, with a risk of boredom). This is known as the “cold start” problem, caused by the lack of initial data or the absence of prior interactions.
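The sketch below illustrates this cold-start effect with invented numbers: as long as few interactions have been observed, the level estimate stays close to the assumed prior, so the first recommendations can be poorly calibrated:

```python
def estimate_level(prior: float, observed: list, prior_weight: float = 5.0) -> float:
    """Blend an assumed prior level with observed scores; the prior dominates early on."""
    return (prior_weight * prior + sum(observed)) / (prior_weight + len(observed))

prior = 0.8                              # assumed starting level (too optimistic here)
actual = [0.3, 0.4, 0.2, 0.3, 0.4, 0.3]  # the student's real performance (0 to 1)

for n in range(len(actual) + 1):
    print(f"after {n} interactions: estimated level {estimate_level(prior, actual[:n]):.2f}")
# Early estimates stay close to 0.8, so the first recommendations are too difficult.
```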
These and other obstacles slow the large-scale development and adoption of these systems. Furthermore, European legislation on AI, the AI Act, adopted in May 2024 and providing a common legal framework for managing AI-related risks, classifies AI in education as high-risk.
The impact of AI on learning
Beyond questions of practical implementation, it is legitimate to ask whether these systems have a real positive impact on learning. Few studies have been carried out on the question, and those available concern specific educational contexts and specific cohorts of students. Such experiments are very expensive to set up, so it is impossible, for the moment, to generalize their optimistic results to all AI tools in education.
In addition, evaluating the discrimination potentially present in these systems is a subject of growing importance. Since 2016, numerous ethical scandals have erupted over AI used in courts (such as COMPAS), in facial recognition (for example at Microsoft and Google), and in recruitment (notably at Amazon). These systems showed a tendency to penalize Black people and women.
Indeed, the data are collected in real-world contexts that themselves carry historical discrimination (biases of gender, origin, or social background); consequently, the data often contain underlying patterns of discrimination. For example, if an online AI system relies on data collected in a context where students from disadvantaged backgrounds historically performed less well due to limited Internet access, the system might incorrectly predict that students from these backgrounds are less likely to succeed, regardless of their actual abilities.
In addition, AI systems are based on models, which are by definition simplified representations, and biases in these models can also harm students with certain characteristics. Suppose a system takes only students' response speed into account to recommend exercises of adaptive difficulty. A student with a disability might take a little longer to complete the exercises while still answering everything correctly; the system would nevertheless offer only exercises that are too simple, degrading that student's learning experience.
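The flaw in this hypothetical scenario can be made explicit in a few lines of code; the rules and thresholds are invented, but they contrast a speed-only recommender with one that takes correctness into account:

```python
def next_difficulty_speed_only(avg_seconds: float, current: int) -> int:
    # Flawed rule: slow answers are read as struggling, so difficulty goes down.
    return current + 1 if avg_seconds < 30 else max(1, current - 1)

def next_difficulty_with_accuracy(accuracy: float, current: int) -> int:
    # Better rule: base the decision on correctness rather than speed.
    return current + 1 if accuracy >= 0.8 else max(1, current - 1)

# A student with a disability: slower on average (45 s) but answers 100% correctly.
print(next_difficulty_speed_only(45.0, current=3))    # 2: pushed toward too-easy exercises
print(next_difficulty_with_accuracy(1.0, current=3))  # 4: difficulty keeps progressing
```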
This is why it is also crucial to evaluate AI systems for the algorithmic biases and forms of discrimination they reproduce or generate, in order to ensure fair and inclusive treatment.
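One common check among many possible fairness metrics is to compare the rate of favorable predictions across groups, often called the demographic parity difference. Here is a minimal sketch with invented data:

```python
predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = "predicted to succeed"
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(group: str) -> float:
    """Share of favorable predictions within one group."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

gap = positive_rate("A") - positive_rate("B")
print(f"demographic parity difference: {gap:.2f}")  # values far from 0 signal a disparity
```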
While these technologies have the potential to assist teachers, significant obstacles clearly remain. However, if the development of these systems is ethical and inclusive, they may become increasingly present in students' school lives.
Mélina Verger, PhD candidate in computer science, Sorbonne Université
This article has been translated into English from The Conversation under a Creative Commons license. See here to read the original article in French.