Artificial intelligence and natural language processing technologies are driving the use of pedagogical conversational agents with empathetic capabilities. They are virtual tools, including chatbots, which are able to evoke an empathetic reaction in the student while helping them develop their skills.

As they are always available and increasingly effective in providing support for students and teachers, these technologies are growing rapidly, especially in the areas of improving and personalising the online learning experience.

However, given their recent inception, there is as yet no broad-based scientific knowledge about the application of these platforms in education, which is why a study by predoctoral researcher Elvis Ortega-Ochoa, a member of the SMARTLEARN group at UOC, focused on the principles that govern these technologies.

In the study, Ortega-Ochoa, together with UOC postdoctoral researcher Marta Arguedas and Thanasis Daradoumis, a member of the university’s Faculty of Computer Science, Multimedia and Telecommunications, analysed more than a thousand studies and articles on the subject in order to undertake a scientific review of the most important contributions and draw useful conclusions for the agents’ development, such as the design principles to be taken into account when beginning the process of creating them.

“Conversational agents must have two of the major skills that teachers put into practice in any teaching and learning process: identifying and regulating emotions by various means, and responding to the student’s emotional state while progressing in the intellectual construction and development of their skills,” Ortega-Ochoa, who is producing his doctoral thesis as part of the Doctoral Programme in Education and ICT (e-Learning), explains.

The study also provides a comprehensive and state-of-the-art overview of the research designs used in the implementation of these agents.

In addition, it examines the factors that influence their effectiveness in education, and also evaluates the types of feedback that improve the impact of empathic agents on learning outcomes.

From chatbots to intelligent tutoring systems

These technological conversational learning tools must enable interaction with the student, either synchronously or asynchronously, and may be integrated into the educational process in various formats and channels: these range from a standalone system, such as a chatbot, to use within an intelligent tutoring system.

“They’re currently being used to develop students’ soft skills and to provide motivation for students when they’re configured with various coaching techniques,” Ortega-Ochoa says.

“At certain points in the teaching process, they can also be useful for introducing new topics or reinforcing content that’s already been learned.”

As for their usefulness and students’ perceptions, various studies have shown the effectiveness of these conversational tools in improving motivation and learning performance.

“We’ve seen that these benefits are related to the robustness of the interactions with the tool, so the responsibility for success lies with the development techniques used for these services,” Ortega-Ochoa explains.

“Virtual agents that make extensive use of artificial intelligence and empathetic capabilities are less monotonous and interrupt conversations less often.”

The authors also point out that these benefits may be partially subject to the novelty effect inherent in emerging technologies.

Looking towards the future, specialists in the field anticipate that these agents will further refine the pedagogical and empathetic characteristics presented in the conversations, so that online learning can be more personalised and adapted to students’ needs.

“With the rise of artificial intelligence and widely used language models like ChatGPT, educational institutions are more willing to experiment in order to incorporate scientific breakthroughs into the institution’s pedagogical model across the board, which means we’re likely to have an institutional benchmark in this area in the coming months or within a few years,” Ortega-Ochoa says.

The study makes a significant contribution to providing education and IT professionals with an overview of the latest developments in this field.

It lists the design principles to be taken into account when creating these agents, highlighting how the empathic component cuts across the overall design of the interaction, the promotion of dialogic learning, proficiency in the field of knowledge, and personalised feedback according to the student’s level.

It also shows how the agents were implemented in learning environments, and sets out the factors to take into account when assessing the effectiveness of the design principles.

The study aims to provide a benchmark for educational and technological teams aiming to undertake a project of this type.

It also highlights aspects that can be improved, such as the lack of clarity about how previous conversations between agents and learners are added to a database in order to determine learning states and personalise responses within the same session.

The research also discusses ethical considerations related to the use of these agents, and offers some advice for their correct development, such as training the system with unbiased data, ethically managing the information the agent collects, ensuring that its algorithm is inclusive, and preventing it from replicating discriminatory stereotypes.

The researchers also warn that most studies of earlier projects focused on students’ perceptions of the quality, experience and emotional bond generated by the interaction, but few assessed the learning and the level of progress in the development of competencies.

Based on these results, they are now considering reviewing the scientific breakthroughs in students’ emotion regulation strategies during their interaction with an empathic pedagogical conversational agent. They also plan an in-depth review of the development techniques for these agents, to determine which is most viable given the resources of the educational and technological team.

Here in Australia, students in 16 NSW schools are helping to test in-house AI app NSWEduChat after the state banned ChatGPT over cheating concerns.

The app can only respond to questions it deems relevant to school work and will not provide a full answer but rather ask students follow-up questions intended to encourage them to think critically about a topic.
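The two behaviours described above, declining off-topic questions and replying with follow-up questions rather than direct answers, can be illustrated with a minimal sketch. This is purely hypothetical: the subject keywords, replies and keyword-matching classifier below are illustrative placeholders, not the actual NSWEduChat logic, which has not been published.

```python
# Hypothetical sketch of a relevance-gated tutoring chatbot: it only engages
# with questions it deems related to school work, and instead of giving a
# full answer it replies with a follow-up question to prompt critical
# thinking. All keywords and wording are placeholder assumptions.

SCHOOL_TOPICS = {"maths", "science", "history", "english", "geography"}

def classify_relevance(question: str) -> bool:
    """Crude stand-in for a relevance classifier: treat the question as
    school work if it mentions a known subject keyword."""
    words = {w.strip("?.,!").lower() for w in question.split()}
    return bool(words & SCHOOL_TOPICS)

def respond(question: str) -> str:
    """Return a Socratic follow-up question rather than a direct answer."""
    if not classify_relevance(question):
        return "I can only help with questions about your school work."
    # Instead of answering, nudge the student to reason about the topic.
    return ("Interesting question! What do you already know about this, "
            "and which part is unclear to you?")

print(respond("Can you explain photosynthesis for my science homework?"))
print(respond("What movie should I watch tonight?"))
```

A production system would replace the keyword check with a trained classifier or a moderated language model, but the control flow, gate first, then redirect to a question, is the same idea.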

In further AI developments, the Federal Government has assembled an “artificial intelligence expert group” to tackle the effects of advancing technology and regulate its use.

The group of 12 consists of professors, company directors and lawyers, and includes CSIRO Chief Scientist Professor Bronwyn Fox; the chair of Australia’s national AI standards committee, Aurélie Jacquet; the director of the Australian Institute for Machine Learning at the University of Adelaide, Professor Simon Lucey; and the founding co-director of the Centre for AI and Digital Ethics, Professor Jeannie Paterson.

“This Artificial Intelligence Expert Group brings the right mix of skills to steer the formation of mandatory guardrails for high-risk AI settings,” Minister for Industry and Science Ed Husic says.

“With expertise in law, ethics and technology, I’m confident this group will get the balance right.

“It’s imperative sophisticated models underpinning high-risk AI systems are transparent and well tested.”
